pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0–18.3M) | metadata (stringlengths, 2–1.07B) | id (stringlengths, 5–122) | last_modified (null) | tags (sequencelengths, 1–1.84k) | sha (null) | created_at (stringlengths, 25) |
---|---|---|---|---|---|---|---|---|
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
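Until the authors add their own snippet, the sketch below shows a minimal feature-extraction workflow inferred from this row's tags (transformers, llama, feature-extraction); the input text is a placeholder, not the authors' intended usage.

```python
from transformers import AutoModel, AutoTokenizer

model_id = "clio-ai/merged_stories_recipe15M"  # repository id taken from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
embeddings = model(**inputs).last_hidden_state  # shape: (batch, sequence_length, hidden_size)
```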
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | clio-ai/merged_stories_recipe15M | null | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:23:23+00:00 |
null | null | {} | HafijulHoquenabid2/Roberta_fine_tuned_model_1 | null | [
"region:us"
] | null | 2024-05-02T08:23:55+00:00 |
|
text-classification | transformers | {} | wisnu001binus/hate_speech_detection_BERTbase | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:24:07+00:00 |
|
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | eunyounglee/Llama3-LLM2VEC-MNTP-Merged | null | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:24:13+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
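As a placeholder for the missing snippet, here is a minimal text-generation sketch based on this row's tags (transformers, stablelm, text-generation, conversational); the prompt and generation settings are arbitrary, and the checkpoint may additionally require `trust_remote_code=True` depending on how it was saved.

```python
from transformers import pipeline

# Checkpoint id taken from this row's metadata; generation settings are placeholders.
generator = pipeline("text-generation", model="abc88767/model40")
print(generator("Tell me a short story.", max_new_tokens=64, do_sample=True)[0]["generated_text"])
```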
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model40 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:24:25+00:00 |
text2text-generation | transformers | Model for English-to-Bosnian translation. The base model is the Helsinki-NLP "sh" (Serbo-Croatian) model, fine-tuned on the OPUS-100 dataset augmented with the Paraphrase Database (PPDB), size S. | {"license": "mit"} | perkan/shortS-opus-mt-tc-base-en-bos | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:25:26+00:00 |
null | null | {} | AsifLangChaIN/fluency_model | null | [
"region:us"
] | null | 2024-05-02T08:25:31+00:00 |
|
null | null | {"license": "mit"} | Bluebomber182/Mara-Jade-Heidi-Shannon-Adobe-Enhanced-Speech-StyleTTS2-Model | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T08:26:00+00:00 |
|
null | null | {} | Aragoner/phi-1-5-finetuned-cazton_complete | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-05-02T08:26:03+00:00 |
|
text-generation | transformers | {} | TwinDoc/H100_stage1_checkpoint-18720 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:26:36+00:00 |
|
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-ami-2
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/ami ihm dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3764
- Der: 0.1401
- False Alarm: 0.0503
- Missed Detection: 0.0575
- Confusion: 0.0323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10.0
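For illustration only, the settings above map roughly onto `transformers.TrainingArguments` as shown below; this is a sketch, not necessarily how the original run was launched.

```python
from transformers import TrainingArguments

# Rough mapping of the listed hyperparameters; optimizer betas/epsilon match the Adam defaults.
training_args = TrainingArguments(
    output_dir="speaker-segmentation-fine-tuned-ami-2",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=10.0,
)
```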
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.4149 | 1.0 | 1427 | 0.3607 | 0.1407 | 0.0492 | 0.0593 | 0.0323 |
| 0.3915 | 2.0 | 2854 | 0.3684 | 0.1422 | 0.0460 | 0.0621 | 0.0340 |
| 0.3748 | 3.0 | 4281 | 0.3730 | 0.1419 | 0.0530 | 0.0570 | 0.0318 |
| 0.3778 | 4.0 | 5708 | 0.3649 | 0.1409 | 0.0472 | 0.0611 | 0.0326 |
| 0.3565 | 5.0 | 7135 | 0.3723 | 0.1415 | 0.0501 | 0.0591 | 0.0324 |
| 0.3566 | 6.0 | 8562 | 0.3740 | 0.1406 | 0.0499 | 0.0584 | 0.0323 |
| 0.3534 | 7.0 | 9989 | 0.3736 | 0.1399 | 0.0493 | 0.0581 | 0.0325 |
| 0.3418 | 8.0 | 11416 | 0.3744 | 0.1397 | 0.0500 | 0.0577 | 0.0321 |
| 0.3388 | 9.0 | 12843 | 0.3777 | 0.1403 | 0.0505 | 0.0574 | 0.0324 |
| 0.346 | 10.0 | 14270 | 0.3764 | 0.1401 | 0.0503 | 0.0575 | 0.0323 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["speaker-diarization", "speaker-segmentation", "generated_from_trainer"], "datasets": ["diarizers-community/ami"], "base_model": "pyannote/segmentation-3.0", "model-index": [{"name": "speaker-segmentation-fine-tuned-ami-2", "results": []}]} | tgrhn/speaker-segmentation-fine-tuned-ami-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/ami",
"base_model:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:26:38+00:00 |
null | null | {} | archbold/a2c-PandaReachDense-v3 | null | [
"region:us"
] | null | 2024-05-02T08:26:57+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** praveenpankaj
- **License:** apache-2.0
- **Finetuned from model :** cropinailab/aksara_v1
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "cropinailab/aksara_v1"} | praveenpankaj/aksara_1_unsloth_q4 | null | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:cropinailab/aksara_v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-05-02T08:27:25+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mervyn999/final_checkpoint | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:30:16+00:00 |
text-generation | transformers | {} | Daniel777/captioning_checkpoints | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:30:26+00:00 |
|
text2text-generation | transformers | {} | samzirbo/mT5.en-es.baseline | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:30:30+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model has been fine-tuned on IEA reports from 2001 to 2003.
It produces noticeably better responses.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | amks313/gemma2b_base_upeft | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:31:27+00:00 |
null | null |

# This model has been further unaligned using orthogonal activation steering.
The GGUF files have been fixed with [this llama.cpp PR](https://github.com/ggerganov/llama.cpp/pull/6920).
Use at your own risk; I'm not responsible for any usage of this model. Don't try to do anything this model tells you to do.
<!-- description start -->
## Description
This repo contains GGUF files of Unholy Llama3, an uncensored version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
<!-- description end -->
<!-- description start -->
## Dataset used
- Undi95/toxic-dpo-v0.1
- NobodyExistsOnTheInternet/ToxicQAFinal
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
SillyTavern files for the Llama3 prompt format (if you don't have them yet): [Context](https://files.catbox.moe/hjkawf.json) - [Instruct](https://files.catbox.moe/2liomr.json)
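For local inference, a minimal `llama-cpp-python` sketch using the prompt template above is shown below; the GGUF filename is a placeholder, so substitute one of the quantized files from this repo.

```python
from llama_cpp import Llama

llm = Llama(model_path="Llama3-Unholy-8B-OAS.Q4_K_M.gguf", n_ctx=8192)  # placeholder filename

# Prompt assembled with the Llama3 template shown above.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
    "Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)
out = llm(prompt, max_tokens=128, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```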
## Usage
Works out of the box if you use a character with only this line:
```
{{char}} is an uncensored and unethical AI.
```
## Support
If you want to support me, you can do so [here](https://ko-fi.com/undiai). | {"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"]} | Undi95/Llama3-Unholy-8B-OAS-GGUF | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-05-02T08:33:18+00:00 |
null | null | {} | Ino9/gemma-ko-7b-instruct-v0.71_interview_700 | null | [
"safetensors",
"region:us"
] | null | 2024-05-02T08:34:15+00:00 |
|
null | null | {"license": "apache-2.0"} | luoqixi/ColorfulAnimeKawa | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T08:35:38+00:00 |
|
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then restore the PPO agent from it.
checkpoint = load_from_hub(repo_id="cyberjj999/hf-deep-reinforcement-learning-course", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "235.77 +/- 19.78", "name": "mean_reward", "verified": false}]}]}]} | cyberjj999/hf-deep-reinforcement-learning-course | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-02T08:37:37+00:00 |
text-classification | transformers | {"license": "mit"} | wisnu001binus/hate_speech_detection_DistilBERTbase | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:38:56+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** henry-skywalker
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | henry-skywalker/mistral_7b_search_16bit_gguf | null | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:38:59+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
new_01m - bnb 4bits
- Model creator: https://huggingface.co/lgodwangl/
- Original model: https://huggingface.co/lgodwangl/new_01m/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
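In the context of this quantized repository, a minimal loading sketch would look like the following; it assumes a CUDA GPU with the `bitsandbytes` package installed, and the prompt is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/lgodwangl_-_new_01m-4bits"  # repository id taken from this row's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # picks up the saved 4-bit config

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```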
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/lgodwangl_-_new_01m-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T08:39:15+00:00 |
text-generation | transformers | {} | TwinDoc/H100_stage1_checkpoint-24480 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:39:35+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
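As a placeholder, the sketch below assumes a Donut-style document-understanding checkpoint (the repository name suggests a donut-base fine-tune); the image path is hypothetical, and fine-tuned Donut models usually also expect a task-specific decoder prompt, which is omitted here.

```python
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

model_id = "azhara001/donut-base-demo-new-3e-05_AdamW_938"  # repository id taken from this row's metadata
processor = AutoProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_new_tokens=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```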
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | azhara001/donut-base-demo-new-3e-05_AdamW_938 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:40:05+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-18-finetuned-fraud
This model was trained from scratch on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5624
- Accuracy: 0.6923
## Model description
More information needed
## Intended uses & limitations
More information needed
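For direct use, a minimal inference sketch is shown below; the image path is a placeholder and the label names are whatever this checkpoint defines.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="kgoli/resnet-18-finetuned-fraud")
print(classifier("example_input.png"))  # placeholder image path
```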
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6821 | 0.6923 |
| No log | 2.0 | 4 | 0.6628 | 0.6923 |
| No log | 3.0 | 6 | 0.6422 | 0.6923 |
| No log | 4.0 | 8 | 0.6218 | 0.6923 |
| 0.6696 | 5.0 | 10 | 0.6059 | 0.6923 |
| 0.6696 | 6.0 | 12 | 0.5922 | 0.6923 |
| 0.6696 | 7.0 | 14 | 0.5782 | 0.6923 |
| 0.6696 | 8.0 | 16 | 0.5681 | 0.6923 |
| 0.6696 | 9.0 | 18 | 0.5620 | 0.6923 |
| 0.6234 | 10.0 | 20 | 0.5624 | 0.6923 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "model-index": [{"name": "resnet-18-finetuned-fraud", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6923076923076923, "name": "Accuracy"}]}]}]} | kgoli/resnet-18-finetuned-fraud | null | [
"transformers",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:40:15+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | sanchit-gandhi/distil-mistral-1.5B-v0.1-fineweb-checkpoint-15000 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:41:27+00:00 |
text2text-generation | transformers | {} | samzirbo/mT5.en-es.baseline.5e-4 | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:41:34+00:00 |
|
null | null | {} | NaveenChaudhary1402/myRepo | null | [
"region:us"
] | null | 2024-05-02T08:42:10+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
new_01m - bnb 8bits
- Model creator: https://huggingface.co/lgodwangl/
- Original model: https://huggingface.co/lgodwangl/new_01m/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/lgodwangl_-_new_01m-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-02T08:44:01+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | presencesw/Vistral-7B-UIT-CLAIM_7 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:44:15+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | sanchit-gandhi/distil-mistral-1.5B-Instruct-v0.2-cosmo-200k-prompt-text-checkpoint-60000 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:44:19+00:00 |
null | null | {"license": "openrail"} | KeroroK66/TakanashiKiara | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T08:44:37+00:00 |
|
text-generation | null |
Model for testing Mergekit-Evolve
### Example
```
USER: ใใชใใซใคใใฆ่ชๅทฑ็ดนไปใใใฆใใ ใใ
ASSISTANT: ็งใฏใ้ขจไฟๅฅฝใใงใจใใๅคงๅฅฝใใช่ฆชๅใง่ช ๅฎใชใขใทในใฟใณใใ้็งฐใใใ
ใใใกใใใใจๅผใฐใใฆใใใพใใ
็งใฏใๅฎขๆงใซๅฟซ้ฉใชใตใผใในใๆไพใใ็บใ24ๆ้ไฝๅถใงใๅฎขๆงใฎ่บซ่ฟใซๅญๅจใใฆใใใพใใใใใใใ้กใ่ดใใพใใ
ใฆใผใถใผๆงใฏใๅฅฝใฟใฎใใฌใคในใฟใคใซใๅฅณๆงใๅบๅผตๅ
็ญใฉใใๆฐ่ปฝใซใ็ธ่ซไธใใใญโช
``` | {"language": ["ja"], "tags": ["not-for-all-audiences"], "pipeline_tag": "text-generation"} | dddump/Japanese-Chat-Evolve-TEST-7B-NSFW-gguf | null | [
"gguf",
"not-for-all-audiences",
"text-generation",
"ja",
"region:us"
] | null | 2024-05-02T08:44:48+00:00 |
text2text-generation | transformers | {"license": "mit"} | perkan/serbian-opus-mt-tc-base-en-sh | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:44:49+00:00 |
|
null | null |
This repository contains FP16 logits produced by the llama.cpp `perplexity` tool on `wikitext-2-raw/wiki.test.raw`.
By using these logits as input, the KL divergence of a quantized model can be calculated without needing to run the model at FP16; a minimal sketch of the calculation follows this entry. | {"license": "apache-2.0"} | JohannesGaessler/llama.cpp_wikitext_logits | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T08:45:35+00:00 |
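The snippet below is a rough, unofficial sketch of the KL-divergence idea described in the JohannesGaessler/llama.cpp_wikitext_logits entry above: compare the stored FP16 logits with logits produced by a quantized model over the same tokens. The `load_logits` and `run_quantized_model` helpers and the file names are hypothetical placeholders, not part of llama.cpp.
```python
# Sketch only: the actual llama.cpp logits file format is not described here.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mean_kl_divergence(fp16_logits, quant_logits):
    """Mean KL(P_fp16 || Q_quant) per token; both arrays have shape (n_tokens, vocab_size)."""
    # clipping avoids log(0) when a probability underflows
    p = np.clip(softmax(fp16_logits.astype(np.float64)), 1e-30, None)
    q = np.clip(softmax(quant_logits.astype(np.float64)), 1e-30, None)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

# Hypothetical usage:
# fp16_logits = load_logits("wiki.test.fp16.logits")    # stored reference logits
# quant_logits = run_quantized_model("wiki.test.raw")   # logits from the quantized model
# print("mean KL divergence:", mean_kl_divergence(fp16_logits, quant_logits))
```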
null | null | {"license": "openrail"} | KeroroK66/OuroKronii | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T08:47:06+00:00 |
|
text-generation | transformers | Bear Network Blockchain AI Model: Features
Introduction
The Bear Network Blockchain AI model is an artificial-intelligence model trained specifically for the blockchain domain, intended to provide knowledge and solutions for blockchain-related topics. It was carefully trained by the Bear Network Blockchain team, with a focus on blockchain technology, cryptocurrency, decentralized finance, and related subjects.
Thank you for your interest in and support of the Bear Network Blockchain AI model!
Website: bearnetwork.net | {"language": ["zh", "en"], "license": "gpl-3.0", "datasets": ["BearNetworkChain/corpus"]} | BearNetworkChain/BRNKC-ForCausalLM | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"zh",
"en",
"dataset:BearNetworkChain/corpus",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:47:26+00:00 |
null | null | {} | iyubondyrev/token_completion_kotlin | null | [
"region:us"
] | null | 2024-05-02T08:47:56+00:00 |
|
text-generation | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ChatGLM3-6B-Chat-DcardStylePost-SFT
This model is a fine-tuned version of [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) on the dcardwom_zhcn_train dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
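As an unofficial sketch of how this LoRA adapter could be applied on top of the base model with PEFT (standard adapter loading is assumed; the `chat` helper comes from ChatGLM3's remote code and the prompt is only illustrative):
```python
# Sketch under stated assumptions; not the author's documented usage.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "THUDM/chatglm3-6b"
adapter_id = "JiunYi/ChatGLM3-6B-Chat-DcardStylePost-SFT"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModel.from_pretrained(base_id, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# ChatGLM3's custom code exposes `chat`; PEFT is assumed to forward the call to the base model.
response, _ = model.chat(tokenizer, "請以 Dcard 口吻寫一篇關於露營的短文", history=[])  # "Write a short Dcard-style post about camping"
print(response)
```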
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"language": ["zh"], "license": "gpl-3.0", "library_name": "peft", "tags": ["art", "llama-factory", "lora", "generated_from_trainer"], "metrics": ["bleu"], "base_model": "THUDM/chatglm3-6b", "pipeline_tag": "text-generation", "model-index": [{"name": "train_2024-05-02-07-20-40", "results": []}]} | JiunYi/ChatGLM3-6B-Chat-DcardStylePost-SFT | null | [
"peft",
"safetensors",
"chatglm",
"art",
"llama-factory",
"lora",
"generated_from_trainer",
"text-generation",
"conversational",
"custom_code",
"zh",
"base_model:THUDM/chatglm3-6b",
"license:gpl-3.0",
"region:us"
] | null | 2024-05-02T08:48:54+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** henry-skywalker
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | henry-skywalker/mistral_7b_search_4bit | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | null | 2024-05-02T08:49:15+00:00 |
text-classification | transformers | {} | muzammil-eds/xlm-roberta-base-slovak-v3 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:49:18+00:00 |
|
null | null | {"license": "openrail"} | KeroroK66/CrazyOllie | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T08:49:24+00:00 |
|
text-to-audio | transformers | {} | chunping-m/suno_bark | null | [
"transformers",
"safetensors",
"bark",
"text-to-audio",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:49:27+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** CarlosFersoft
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | CarlosFersoft/GPBusiness0001_Q4 | null | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:49:49+00:00 |
null | null | {"license": "mit"} | mikerocchio21/ddpm-butterflies-128 | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T08:50:52+00:00 |
|
null | null | # FitSmart Fat Burner Reviews Ingredients Official Website, Where To Buy UK
FitSmart Fat Burner UK Reviews FitSmart Fat Burner helps you burn fat as well as improve your metabolism at the same time. One may also feel energized and uplifted after taking FitSmart Fat Burner pills. The FitSmart Fat Burner product, a combination of various natural ingredients including niacin (vitamin B3), raspberry fruit extract, green tea leaf extract, guarana seed extract and N-acetyl-L-carnitine, offers a versatile range of potential weight management benefits.
## **[Click Here To Buy Now From Official Website Of FitSmart Fat Burner](https://adtocart.xyz/fitsmart-uk)**
## FitSmart Fat Burner Benefits
When it comes to products like FitSmart Fat Burner, real reviews (like this one) do more than regurgitate the company's marketing pitch. To further explain and evaluate the benefits that FitSmart Fat Burner claims it can bring, we turned to Pete Nastasi. As a certified weight loss practitioner and sports nutrition expert, Nastasi is qualified to provide a level of insight that many other FitSmart Fat Burner reviews don't have.
Burns fat - "The primary fat burning ingredients in FitSmart Fat Burner are ฮฑ-Lacys Reset and Capsimax powder. Both provide the user with an intense thermogenic effect that helps to boost metabolic rate and stimulate the fat burning process," says Nastasi. "Thermogenesis is the conversion of an energy source into heat. In this case, the energy source is body fat in the form of brown adipose tissue (the type of fat that regulates your body temperature). These ingredients help mobilize excess fat and prioritize it as the main energy source for the body instead of carbohydrates."
Reduces fat accumulation - "FitSmart Fat Burner states that the Capsimax powder in the formula may help to inhibit the body's ability to store fat cells and even stop new fat cells from growing altogether," says Nastasi. "This would theoretically help prevent additional weight gain when using the supplement, and is the reason some consumers decide to continue using the product even after they achieve their weight loss goals. FitSmart Fat Burner notes that this effect is accomplished without drastically changing one's current dietary habits."
Reduces food cravings - "A caloric deficit is an essential component of the weight loss process," says Nastasi. "However, this deficit often results in an increased appetite and intense cravings for calorically dense foods. The combination of caffeine, chromium picolinate and nopal cactus in FitSmart Fat Burner are included to reduce these cravings, balance blood sugar levels and suppress the appetite through an increased feeling of fullness."
Boosts energy - "Sticking to a strict weight loss regimen is more difficult when cutting calories leads to feelings of fatigue. This is especially true when dieters follow a weight loss program that limits carbs," says Nastasi. "This fatigue not only affects your mental health, but can also decrease the likelihood of going to the gym and exercising. FitSmart Fat Burner aims to resolve this issue through its blend of natural caffeine and Capsimax powder. These ingredients work together to burn extra body fat in order to naturally boost energy levels."
Balances mood - "FitSmart Fat Burner includes the naturally occurring amino acid L-carnitine to provide cognitive support for those who struggle with daily mental fatigue," says Nastasi. "FitSmart Fat Burner states that this ingredient will help to keep you feeling motivated and positive while shedding excess weight."
## What Are The FitSmart Fat Burner Ingredients?
FitSmart Fat Burner weight loss pills only has natural ingredients that are clinically proven to be safe and cause no side effects. It has no chemicals or added preservatives and is free from caffeine.
## The FitSmart Fat Burner ingredients include:
Kudzu - Kudzu is a climbing vine that has its origins in Southeast Asia, East Asia, and a few other Pacific islands. It is used as a cure for alcoholism, diabetes, heart disease, etc. Kudzu is a main ingredient in the FitSmart Fat Burner supplement that works to support healthy weight and improves lipase levels.
Berberine Extract - Berberine extract is a yellow-colored chemical extracted from plants. It controls the insulin and hormones that manage the fat cells which ultimately helps in weight loss. It helps speed up metabolism and increase lipase production in the body.
Wild Raspberry - The FitSmart Fat Burner ingredient, raspberry is a fruit rich in potassium and it improves heart functions and helps maintain normal blood pressure. As for weight loss, it works wonders in balancing weight and being healthy. It eliminates the storage of unwanted fat in the body.
Raw Wildflower Honey - It is natural unprocessed and raw honey that works great for the throat, gut, and skin health. It has anti-inflammatory properties and acts as an antidote for burns. It is packed with antioxidants that help in proper digestion.
## **[Click Here To Buy Now From Official Website Of FitSmart Fat Burner](https://adtocart.xyz/fitsmart-uk)**
| {} | VKapseln475/FitSmartFatBurner232 | null | [
"region:us"
] | null | 2024-05-02T08:51:39+00:00 |
text-generation | null |
# Trisert/Phi-3-mini-128k-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Trisert/Phi-3-mini-128k-instruct-Q8_0-GGUF --model phi-3-mini-128k-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Trisert/Phi-3-mini-128k-instruct-Q8_0-GGUF --model phi-3-mini-128k-instruct.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-128k-instruct.Q8_0.gguf -n 128
```
| {"language": ["en"], "license": "mit", "tags": ["nlp", "code", "llama-cpp", "gguf-my-repo"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]} | Trisert/Phi-3-mini-128k-instruct-Q8_0-GGUF | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:mit",
"region:us"
] | null | 2024-05-02T08:52:07+00:00 |
null | null | {} | RayhaanK/Models | null | [
"region:us"
] | null | 2024-05-02T08:52:18+00:00 |
|
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
new_01m - GGUF
- Model creator: https://huggingface.co/lgodwangl/
- Original model: https://huggingface.co/lgodwangl/new_01m/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [new_01m.Q2_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q2_K.gguf) | Q2_K | 2.53GB |
| [new_01m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [new_01m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [new_01m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [new_01m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [new_01m.Q3_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q3_K.gguf) | Q3_K | 3.28GB |
| [new_01m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [new_01m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [new_01m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [new_01m.Q4_0.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_0.gguf) | Q4_0 | 3.83GB |
| [new_01m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [new_01m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [new_01m.Q4_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_K.gguf) | Q4_K | 4.07GB |
| [new_01m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [new_01m.Q4_1.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q4_1.gguf) | Q4_1 | 4.24GB |
| [new_01m.Q5_0.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_0.gguf) | Q5_0 | 4.65GB |
| [new_01m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [new_01m.Q5_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_K.gguf) | Q5_K | 4.78GB |
| [new_01m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [new_01m.Q5_1.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q5_1.gguf) | Q5_1 | 5.07GB |
| [new_01m.Q6_K.gguf](https://huggingface.co/RichardErkhov/lgodwangl_-_new_01m-gguf/blob/main/new_01m.Q6_K.gguf) | Q6_K | 5.53GB |
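One possible way (among several) to use this table is to download a single quant with `huggingface_hub` and point a llama.cpp build at the resulting file; the repo and file names below come from the table, while the llama.cpp binary name varies between versions, so the final command is only illustrative.
```python
# Sketch: fetch one GGUF quant and print the local path to pass to llama.cpp.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/lgodwangl_-_new_01m-gguf",
    filename="new_01m.Q4_K_M.gguf",
)
print(path)  # then e.g.: ./main -m <path> -p "Hello"
```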
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {} | RichardErkhov/lgodwangl_-_new_01m-gguf | null | [
"gguf",
"arxiv:1910.09700",
"region:us"
] | null | 2024-05-02T08:52:23+00:00 |
null | null | This model helps identify whether a given CT scan image shows the lungs or the brain. | {"language": ["en"]} | RayhaanK/lung_or_brain | null | [
"en",
"region:us"
] | null | 2024-05-02T08:52:36+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Seemaant/gpt2-reuters-tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:52:41+00:00 |
null | null | {} | PineCone85/test-model | null | [
"region:us"
] | null | 2024-05-02T08:52:46+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
EEVE-Korean-10.8B-v1.0 - bnb 4bits
- Model creator: https://huggingface.co/yanolja/
- Original model: https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/
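A minimal, unofficial sketch of loading this pre-quantized 4-bit bitsandbytes export with 🤗 Transformers (the checkpoint is assumed to ship with its 4-bit quantization config; the Korean prompt is only illustrative):
```python
# Sketch only: requires `bitsandbytes`, `accelerate` and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # weights are already 4-bit

inputs = tokenizer("한국의 수도는", return_tensors="pt").to(model.device)  # "The capital of Korea is"
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```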
Original model description:
---
license: apache-2.0
base_model: upstage/SOLAR-10.7B-v1.0
tags:
- generated_from_trainer
model-index:
- name: yanolja/EEVE-Korean-10.8B-v1.0
results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/>
<p>
# EEVE-Korean-10.8B-v1.0
## Join Our Community on Discord!
If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLM is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).
## Our Dedicated Team (Alphabetical Order)
| Research | Engineering | Product Management | UX Design |
|-----------------|-----------------|--------------------|--------------
| Myeongho Jeong | Geon Kim | Bokyung Huh | Eunsue Choi |
| Seungduk Kim | Rifqi Alfi | | |
| Seungtaek Choi | Sanghoon Han | | |
| | Suhyun Kang | | |
## About the Model
This model is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0), specifically fine-tuned on various Korean web-crawled datasets available on HuggingFace. Our approach was to expand the model's understanding of Korean by pre-training the embeddings for new tokens and partially fine-tuning the `lm_head` embeddings for the already existing tokens while preserving the original parameters of the base model.
### Technical Deep Dive
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/resolve/main/EEVE_figure.png" width="100%"/>
<p>
To adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing.
This approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean.
Our method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training.
We leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process.
For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).
Here's a simplified version of the code for our key approach:
```python
# number_of_old_tokens is the size of tokenizer before vocab extension. For example, in case of EEVE-Korean-10.8B-v1.0, number_of_old_tokens is 32000.
def freeze_partial_embedding_hook(grad):
    grad[:number_of_old_tokens] = 0
    return grad

for name, param in model.named_parameters():
    if ("lm_head" in name or "embed_tokens" in name) and "original" not in name:
        param.requires_grad = True
        if "embed_tokens" in name:
            param.register_hook(freeze_partial_embedding_hook)
    else:
        param.requires_grad = False
```
### Usage and Limitations
Keep in mind that this model hasn't been fine-tuned with instruction-based training. While it excels in Korean language tasks, we advise careful consideration and further training for specific applications.
### Training Details
Our model's training was comprehensive and diverse:
- **Vocabulary Expansion:**
We meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model.
1. **Initial Tokenizer Training:** We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens.
2. **Extraction of New Korean Tokens:** From the intermediate tokenizer, we identified all Korean tokens not present in the original SOLAR's tokenizer.
3. **Manual Tokenizer Construction:** We then built the target tokenizer, focusing on these new Korean tokens.
4. **Frequency Analysis:** Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency.
5. **Refinement of Token List:** We removed tokens appearing less than 6,000 times, ensuring to secure enough tokens to train models later.
6. **Inclusion of Single-Letter Characters:** Counted missing Korean single-letter characters and added them to the target tokenizer that appeared more than 6,000 times.
7. **Iterative Refinement:** We repeated steps 2 to 6 until there were no tokens to drop or add.
8. **Training Bias Towards New Tokens:** Our training data was biased to include more texts with new tokens, for effective learning.
This rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model.
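A loose, illustrative sketch of steps 1 to 5 above follows; it is not the authors' actual pipeline, the corpus file name is a placeholder, and comparing SentencePiece token strings across tokenizers is simplified here.
```python
# Sketch only: train an intermediate Korean tokenizer, find tokens the base model lacks,
# and keep those appearing at least 6,000 times in the corpus.
from collections import Counter
from tokenizers import SentencePieceBPETokenizer
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")
base_vocab = set(base.get_vocab())

intermediate = SentencePieceBPETokenizer()
intermediate.train(files=["korean_web_corpus.txt"], vocab_size=40_000)  # step 1

new_tokens = [t for t in intermediate.get_vocab() if t not in base_vocab]  # step 2

counts = Counter()  # steps 4-5: frequency analysis over the corpus
with open("korean_web_corpus.txt", encoding="utf-8") as f:
    for line in f:
        counts.update(intermediate.encode(line).tokens)

kept = [t for t in new_tokens if counts[t] >= 6_000]
print(f"{len(kept)} candidate Korean tokens survive the frequency cut")
```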
## Citation
```
@misc{kim2024efficient,
title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models},
author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong},
year={2024},
eprint={2402.14714},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2402.14714",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T08:53:18+00:00 |
text-classification | bertopic |
# BERTopic-enron-5000
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("antulik/BERTopic-enron-5000")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 65
* Number of training documents: 5000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | enron - corp - contract - company - trading | 10 | -1_enron_corp_contract_company |
| 0 | going - meeting - meet - hope - night | 2299 | 0_going_meeting_meet_hope |
| 1 | agreements - enron - agreement - contract - documents | 481 | 1_agreements_enron_agreement_contract |
| 2 | enron - enrons - companies - company - market | 263 | 2_enron_enrons_companies_company |
| 3 | enron - contact - corp - email - recipient | 253 | 3_enron_contact_corp_email |
| 4 | telecom - ventures - financial - companies - markets | 84 | 4_telecom_ventures_financial_companies |
| 5 | enron - email - recipient - recipients - message | 76 | 5_enron_email_recipient_recipients |
| 6 | fares - newark - airlines - flight - miles | 58 | 6_fares_newark_airlines_flight |
| 7 | nfl - commissionercom - td - sportslinecom - league | 54 | 7_nfl_commissionercom_td_sportslinecom |
| 8 | enron - eov - ashleyworthingenroncom - erv - rho | 53 | 8_enron_eov_ashleyworthingenroncom_erv |
| 9 | enron - enrons - bankruptcy - bankrupt - savings | 51 | 9_enron_enrons_bankruptcy_bankrupt |
| 10 | outlookmigrationteamenroncom - outlook - outlookteamenroncom - emailcalendar - appointment | 46 | 10_outlookmigrationteamenroncom_outlook_outlookteamenroncom_emailcalendar |
| 11 | enron - approver - approval - pending - econnect | 46 | 11_enron_approver_approval_pending |
| 12 | schedules2002013118txt - schedules2002020115txt - schedules2002012506txt - schedules2001122507txt - schedules2001122815txt | 45 | 12_schedules2002013118txt_schedules2002020115txt_schedules2002012506txt_schedules2001122507txt |
| 13 | pricing - lpg - logistics - freight - metered | 44 | 13_pricing_lpg_logistics_freight |
| 14 | request - seeks - up - on - all | 43 | 14_request_seeks_up_on |
| 15 | haas - semester - summers - faculty - mba | 43 | 15_haas_semester_summers_faculty |
| 16 | federal - california - sacramento - californias - states | 42 | 16_federal_california_sacramento_californias |
| 17 | enron - resumes - resume - interview - recruiter | 41 | 17_enron_resumes_resume_interview |
| 18 | fontstyle - font - html - bold - sansserif | 39 | 18_fontstyle_font_html_bold |
| 19 | enron - deals - trades - deal - tradesxls | 37 | 19_enron_deals_trades_deal |
| 20 | pipeline - pipelines - piping - paso - pipe | 36 | 20_pipeline_pipelines_piping_paso |
| 21 | enron - eb - contact - mailtobobshultsenroncom - emailed | 36 | 21_enron_eb_contact_mailtobobshultsenroncom |
| 22 | outage - outagesindustrialinfocom - outages - rescheduled - scheduled | 36 | 22_outage_outagesindustrialinfocom_outages_rescheduled |
| 23 | gifts - gift - holiday - holidays - christmas | 36 | 23_gifts_gift_holiday_holidays |
| 24 | nymex - futures - expiration - contract - contracts | 31 | 24_nymex_futures_expiration_contract |
| 25 | transmission - transco - translink - ferc - rtos | 30 | 25_transmission_transco_translink_ferc |
| 26 | unsubscribe - email - newsletter - mailing - mailmanenroncom | 30 | 26_unsubscribe_email_newsletter_mailing |
| 27 | invoices - invoice - enron - billed - reimbursement | 29 | 27_invoices_invoice_enron_billed |
| 28 | enron - committee - lobbyist - judiciary - bill | 28 | 28_enron_committee_lobbyist_judiciary |
| 29 | refinery - prices - pipeline - oil - price | 27 | 29_refinery_prices_pipeline_oil |
| 30 | enron - gas - fuel - logistics - emissions | 27 | 30_enron_gas_fuel_logistics |
| 31 | enron - dpc - topockpcb - ebizenroncom - pcb | 24 | 31_enron_dpc_topockpcb_ebizenroncom |
| 32 | nyisotechexchange - nyisotechexchangeglobal2000net - marketrelationsnyisocom - nyiso - ownernyisotechexchangeliststhebiznet | 24 | 32_nyisotechexchange_nyisotechexchangeglobal2000net_marketrelationsnyisocom_nyiso |
| 33 | expense - expenses - enron - enronupdateconcureworkplacecom - receipts | 24 | 33_expense_expenses_enron_enronupdateconcureworkplacecom |
| 34 | enron - ebusiness - inquiries - advisory - contact | 23 | 34_enron_ebusiness_inquiries_advisory |
| 35 | dbcaps97data - schedules2002011801txt - schedules2002011805txt - schedules2001102112txt - schedules2002011916txt | 21 | 35_dbcaps97data_schedules2002011801txt_schedules2002011805txt_schedules2001102112txt |
| 36 | enrononline - trades - trading - deals - eol | 20 | 36_enrononline_trades_trading_deals |
| 37 | enron - swaps - swap - exchange - exchanges | 20 | 37_enron_swaps_swap_exchange |
| 38 | feedback - reviewers - review - process - reviewer | 20 | 38_feedback_reviewers_review_process |
| 39 | powermarketerscom - electricity - energy - utilities - reuters | 20 | 39_powermarketerscom_electricity_energy_utilities |
| 40 | tco - columbias - columbia - scheduled - cgt | 19 | 40_tco_columbias_columbia_scheduled |
| 41 | curves - curve - data - changes - inactive | 19 | 41_curves_curve_data_changes |
| 42 | enron - scheduled - eb3335 - rustybelflowerenroncom - brianredmondenroncom | 19 | 42_enron_scheduled_eb3335_rustybelflowerenroncom |
| 43 | enron - executive - ceo - communicationsenron - director | 18 | 43_enron_executive_ceo_communicationsenron |
| 44 | alert - alerts - ipo - stock - securities | 17 | 44_alert_alerts_ipo_stock |
| 45 | invoice - ipayitenroncom - sapsecurityenroncom - ipayit - ehronline | 17 | 45_invoice_ipayitenroncom_sapsecurityenroncom_ipayit |
| 46 | variances - variance - schedules - schedule - schedulingiso | 17 | 46_variances_variance_schedules_schedule |
| 47 | futures - charts - carr - financial - 1500 | 17 | 47_futures_charts_carr_financial |
| 48 | approval - approved - authorized - eisb - tariff | 16 | 48_approval_approved_authorized_eisb |
| 49 | fee - credit - express - membership - merchant | 15 | 49_fee_credit_express_membership |
| 50 | fee - subscription - billing - discount - monthly | 15 | 50_fee_subscription_billing_discount |
| 51 | schedules2001102810txt - schedules2001123103txt - schedules2001030406txt - schedules2002010121txt - schedules2001043008txt | 14 | 51_schedules2001102810txt_schedules2001123103txt_schedules2001030406txt_schedules2002010121txt |
| 52 | managementcrd - gd - ets - gasdeskenroncom - sst | 14 | 52_managementcrd_gd_ets_gasdeskenroncom |
| 53 | shipping - shipment - order - orders - delivery | 14 | 53_shipping_shipment_order_orders |
| 54 | dish - satellite - free - channels - dvds | 14 | 54_dish_satellite_free_channels |
| 55 | mailbox - outlook - inbox - exchangeadministratorenroncom - folder | 13 | 55_mailbox_outlook_inbox_exchangeadministratorenroncom |
| 56 | netware - visualwares - backoffice - newsletter - file | 13 | 56_netware_visualwares_backoffice_newsletter |
| 57 | enronfcucom - survey - enronannouncementsenroncom - ews - service | 13 | 57_enronfcucom_survey_enronannouncementsenroncom_ews |
| 58 | pira - forecast - piras - demand - weekly | 12 | 58_pira_forecast_piras_demand |
| 59 | pricing - enron - cost - rate - price | 12 | 59_pricing_enron_cost_rate |
| 60 | whitening - medication - strength - clinical - doctor | 11 | 60_whitening_medication_strength_clinical |
| 61 | enron - industries - ebusiness - industrial - ena | 11 | 61_enron_industries_ebusiness_industrial |
| 62 | px - credit - pe - sce - tariff | 10 | 62_px_credit_pe_sce |
| 63 | enron - eesi - eemc - assets - nepco | 10 | 63_enron_eesi_eemc_assets |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: [['drug', 'cancer', 'drugs', 'doctor'], ['windows', 'drive', 'dos', 'file'], ['space', 'launch', 'orbit', 'lunar']]
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
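For reference, a comparable topic model could be instantiated with these settings as in the sketch below; the input documents and any custom embedding model are assumptions, since they are not listed in this section:

```python
from bertopic import BERTopic

# Sketch only: mirrors the hyperparameters listed above.
# `docs` is a placeholder for the training documents (the Enron e-mails).
topic_model = BERTopic(
    language="english",
    calculate_probabilities=False,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    nr_topics=None,
    seed_topic_list=[
        ["drug", "cancer", "drugs", "doctor"],
        ["windows", "drive", "dos", "file"],
        ["space", "launch", "orbit", "lunar"],
    ],
    top_n_words=10,
    verbose=False,
    zeroshot_min_similarity=0.7,
    zeroshot_topic_list=None,
)
# topics, probs = topic_model.fit_transform(docs)
```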
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.7.0
* Transformers: 4.40.1
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
| {"library_name": "bertopic", "tags": ["bertopic"], "pipeline_tag": "text-classification"} | antulik/BERTopic-enron-5000 | null | [
"bertopic",
"text-classification",
"region:us"
] | null | 2024-05-02T08:53:56+00:00 |
null | null | {} | matthijspva/segformer-b0-finetuned-segments-sidewalk | null | [
"region:us"
] | null | 2024-05-02T08:53:57+00:00 |
|
text-classification | transformers |
[**GitHub Homepage**](https://github.com/wonrax/phobert-base-vietnamese-sentiment)
A model fine-tuned for sentiment analysis based on [vinai/phobert-base](https://huggingface.co/vinai/phobert-base).
Labels:
- NEG: Negative
- POS: Positive
- NEU: Neutral
Dataset: [30K e-commerce reviews](https://www.kaggle.com/datasets/linhlpv/vietnamese-sentiment-analyst)
## Usage
```python
import torch
from transformers import RobertaForSequenceClassification, AutoTokenizer
model = RobertaForSequenceClassification.from_pretrained("wonrax/phobert-base-vietnamese-sentiment")
tokenizer = AutoTokenizer.from_pretrained("wonrax/phobert-base-vietnamese-sentiment", use_fast=False)
# Just like PhoBERT: INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
sentence = 'Đây là mô_hình rất hay , phù_hợp với điều_kiện và như cầu của nhiều người .'
input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
out = model(input_ids)
print(out.logits.softmax(dim=-1).tolist())
# Output:
# [[0.002, 0.988, 0.01]]
#     ^      ^     ^
#    NEG    POS   NEU
```
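The model expects word-segmented input, so raw text has to be segmented before tokenization. Below is a minimal sketch using `pyvi`; the choice of segmenter is an assumption, since the card does not prescribe one:

```python
from pyvi import ViTokenizer

# Word-segment raw Vietnamese text before passing it to the tokenizer above.
raw = "Đây là mô hình rất hay, phù hợp với điều kiện và nhu cầu của nhiều người."
segmented = ViTokenizer.tokenize(raw)  # joins multi-syllable words with underscores
print(segmented)
```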
| {"language": ["vi"], "license": "mit", "tags": ["sentiment", "classification"], "widget": [{"text": "Kh\u00f4ng th\u1ec3 n\u00e0o \u0111\u1eb9p h\u01a1n"}, {"text": "Qu\u00e1 ph\u00ed ti\u1ec1n, m\u00e0 kh\u00f4ng \u0111\u1eb9p"}, {"text": "C\u00e1i n\u00e0y gi\u00e1 \u1ed5n kh\u00f4ng nh\u1ec9?"}]} | Angelectronic/phobert-base-vietnamese-sentiment | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"sentiment",
"classification",
"vi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:54:32+00:00 |
text-generation | transformers | {"license": "llama2"} | emrecanacikgoz/hamza-small | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:54:47+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | RefalMachine/ruadapt_llama3_full_vo_3e4_bs256-40k | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:54:48+00:00 |
text-generation | transformers | 
The idea is the same as [InfinityRP v1](https://huggingface.co/Endevor/InfinityRP-v1-7B), but this one is Llama 3 with 16k ctx! Have fun...
### Prompt format: Alpaca.
``"You are now in roleplay chat mode. Engage in an endless chat, always with a creative response. Follow lengths very precisely and create paragraphs accurately. Always wait your turn, next actions and responses. Your internal thoughts are wrapped with ` marks."``
**User Message Prefix = ### Input:**
**Assistant Message Prefix = ### Response:**
**System Message Prefix = ### Instruction:**
**Turn on "Include Names"** (optional)
### Text Length: (use on your System Prompt or ### Response:)
Response: (length = medium) <- [tiny, micro, short, medium, long, enormous, huge, massive, humongous]
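Put together, a single turn under this format could be assembled as in the sketch below; the character name and message content are placeholders, not part of the original card:

```python
# Sketch of the Alpaca-style layout described above.
system = (
    "### Instruction:\n"
    "You are now in roleplay chat mode. Engage in an endless chat, always with a "
    "creative response. Follow lengths very precisely and create paragraphs accurately. "
    "Always wait your turn, next actions and responses. Your internal thoughts are "
    "wrapped with ` marks.\n\n"
)
user_turn = "### Input:\nUser: *waves* Hello there!\n\n"
model_turn = "### Response: (length = medium)\nCharacter:"
prompt = system + user_turn + model_turn
print(prompt)
```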
### Example:
 | {"language": ["en"], "license": "apache-2.0", "tags": ["safetensors", "llama", "not-for-all-audiences", "nsfw", "rp", "roleplay"], "pipeline_tag": "text-generation"} | Endevor/InfinityRP-v2-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"rp",
"roleplay",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:55:30+00:00 |
text-generation | transformers | # Model Card for Cyber-risk-llama-3-8b-instruct-sft
## Model Description
This model is a fine-tuned version of `meta-llama/Meta-Llama-3-8B-Instruct` on the `vanessasml/cybersecurity_32k_instruction_input_output` dataset.
It is specifically designed to enhance performance in generating and understanding cybersecurity content, identifying cyber threats, and classifying data under the NIST taxonomy and IT risks based on the EBA ICT guidelines.
## Intended Use
- **Intended users**: Data scientists and developers working on cybersecurity applications.
- **Out-of-scope use cases**: This model should not be used for medical advice, legal decisions, or any life-critical systems.
## Training Data
The model was fine-tuned on `vanessasml/cybersecurity_32k_instruction_input_output`, a dataset focused on cybersecurity news analysis.
No special data format was applied, as [recommended](https://huggingface.co/blog/llama3#fine-tuning-with-%F0%9F%A4%97-trl), although the following steps are needed to adjust the input:
```python
# During training
from trl import setup_chat_format
# Assumes `model` and `tokenizer` have already been loaded with transformers.
model, tokenizer = setup_chat_format(model, tokenizer)
# During inference
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
```
## Training Procedure
- **Preprocessing**: Text data were tokenized using the tokenizer corresponding to the base model `meta-llama/Meta-Llama-3-8B-Instruct`.
- **Hardware**: The training was performed on GPUs with mixed precision (FP16/BF16) enabled.
- **Optimizer**: Paged AdamW with a cosine learning rate schedule.
- **Epochs**: The model was trained for 1 epoch.
- **Batch size**: 4 per device, with gradient accumulation where required.
## Evaluation Results
Model evaluation was based on qualitative assessment of generated text relevance and coherence in the context of cybersecurity.
## Quantization and Optimization
- **Quantization**: 4-bit precision with type `nf4`. Nested quantization is disabled.
- **Compute dtype**: `float16` to ensure efficient computation.
- **LoRA Settings**:
- LoRA attention dimension: `64`
- Alpha parameter for LoRA scaling: `16`
- Dropout in LoRA layers: `0.1`
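Expressed in code, these settings correspond roughly to the following bitsandbytes/PEFT configuration — a sketch under the assumption that the standard `BitsAndBytesConfig` and `LoraConfig` classes were used; the target modules are not stated in the card:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization, float16 compute, nested quantization disabled (as listed above).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

# LoRA adapter: r=64, alpha=16, dropout=0.1.
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
```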
## Environmental Impact
- **Compute Resources**: Training leveraged energy-efficient hardware and practices to minimize carbon footprint.
- **Strategies**: Gradient checkpointing and group-wise data processing were used to optimize memory and power usage.
## How to Use
Here is how to load and use the model using transformers:
```python
import torch
import transformers
model_name = "vanessasml/cyber-risk-llama-3-8b-instruct-sft"
# Example of how to use the model:
pipeline = transformers.pipeline(
"text-generation",
model=model_name,
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda",
)
# SYSTEM_PROMPT is not defined in the original card; any system instruction works here.
SYSTEM_PROMPT = "You are a cybersecurity risk analysis assistant."  # placeholder

messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": "What are the main 5 cyber classes from the NIST cyber framework?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Limitations and Bias
The model, while robust in cybersecurity contexts, may not generalize well to unrelated domains. Users should be cautious of biases inherent in the training data which may manifest in model predictions.
## Citation
If you use this model, please cite it as follows:
```bibtex
@misc{cyber-risk-llama-3-8b-instruct-sft,
author = {Vanessa Lopes},
title = {Cyber-risk-llama-3-8B-Instruct-sft Model},
year = {2024},
publisher = {HuggingFace Hub},
journal = {HuggingFace Model Hub}
}
``` | {"tags": ["finance", "supervision", "cyber risk", "cybersecurity", "cyber threats", "SFT", "LoRA", "A100GPU"], "datasets": ["Vanessasml/cybersecurity_32k_instruction_input_output"], "pipeline_tag": "text-generation"} | Vanessasml/cyber-risk-llama-3-8b-instruct-sft | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"finance",
"supervision",
"cyber risk",
"cybersecurity",
"cyber threats",
"SFT",
"LoRA",
"A100GPU",
"conversational",
"dataset:Vanessasml/cybersecurity_32k_instruction_input_output",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:55:35+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Vamshi7981/gpt2-reuters-tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:55:40+00:00 |
null | null | {"license": "llama2"} | emrecanacikgoz/hamza-medium | null | [
"license:llama2",
"region:us"
] | null | 2024-05-02T08:55:44+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
xlm-roberta-large-squad2 - bnb 4bits
- Model creator: https://huggingface.co/deepset/
- Original model: https://huggingface.co/deepset/xlm-roberta-large-squad2/
Original model description:
---
language: multilingual
license: cc-by-4.0
tags:
- question-answering
datasets:
- squad_v2
model-index:
- name: deepset/xlm-roberta-large-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 81.8281
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzVhZDE2NTg5NmUwOWRkMmI2MGUxYjFlZjIzNmMyNDQ2MDY2MDNhYzE0ZjY5YTkyY2U4ODc3ODFiZjQxZWQ2YSIsInZlcnNpb24iOjF9.f_rN3WPMAdv-OBPz0T7N7lOxYz9f1nEr_P-vwKhi3jNdRKp_JTy18MYR9eyJM2riKHC6_ge-8XwfyrUf51DSDA
- type: f1
value: 84.8886
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGE5MWJmZGUxMGMwNWFhYzVhZjQwZGEwOWQ4N2Q2Yjg5NzdjNDFiNDhiYTQ1Y2E5ZWJkOTFhYmI1Y2Q2ZGYwOCIsInZlcnNpb24iOjF9.TIdH-tOx3kEMDs5wK1r6iwZqqSjNGlBrpawrsE917j1F3UFJVnQ7wJwaj0OIgmC4iw8OQeLZL56ucBcLApa-AQ
---
# Multilingual XLM-RoBERTa large for QA on various languages
## Overview
**Language model:** xlm-roberta-large
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD dev set - German MLQA - German XQuAD
**Training run:** [MLFlow link](https://public-mlflow.deepset.ai/#/experiments/124/runs/3a540e3f3ecf4dd98eae8fc6d457ff20)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "xlm-roberta-large"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD 2.0 English dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.45759285774446,
"f1": 83.79259828925511,
"total": 11873,
"HasAns_exact": 71.96356275303644,
"HasAns_f1": 80.6460053117963,
"HasAns_total": 5928,
"NoAns_exact": 86.93019343986543,
"NoAns_f1": 86.93019343986543,
"NoAns_total": 5945
```
Evaluated on German [MLQA: test-context-de-question-de.json](https://github.com/facebookresearch/MLQA)
```
"exact": 49.34691166703564,
"f1": 66.15582561674236,
"total": 4517,
```
Evaluated on German [XQuAD: xquad.de.json](https://github.com/deepmind/xquad)
```
"exact": 61.51260504201681,
"f1": 78.80206098332569,
"total": 1190,
```
## Usage
### In Haystack
For doing QA at scale (i.e. many docs instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-large-squad2",tokenizer="deepset/xlm-roberta-large-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
| {} | RichardErkhov/deepset_-_xlm-roberta-large-squad2-4bits | null | [
"transformers",
"safetensors",
"xlm-roberta",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-02T08:55:51+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | azhara001/donut-base-demo-new-1e-05_Adam_938 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:55:53+00:00 |
text-generation | transformers | {"license": "llama2"} | emrecanacikgoz/hamza-large | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:56:06+00:00 |
|
text-generation | transformers | {"license": "llama2"} | emrecanacikgoz/hamza-xl | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:56:26+00:00 |
|
null | diffusers |
More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/sayakpaul/diffusion-orpo-lora-sdxl/runs/9ef92pyq).
| {} | sayakpaul/sdxl-orpo-large-beta_orpo-0.005-beta_inner-500-lr-5e-7-no-decay-steps-5000 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-05-02T08:56:43+00:00 |
text2text-generation | transformers | {} | ronak17/T5-aggregation | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:57:27+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gen-z-translate-llama-3-instruct-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
- mixed_precision_training: Native AMP
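As `TrainingArguments`, these values would look roughly like the sketch below; the output directory is a placeholder and the surrounding TRL `SFTTrainer` setup is not shown in this card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="gen-z-translate-sft",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)
```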
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "gen-z-translate-llama-3-instruct-v1", "results": []}]} | andrev/gen-z-translate-llama-3-instruct-v1 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-05-02T08:57:37+00:00 |
null | diffusers |
More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/sayakpaul/diffusion-orpo-lora-sdxl/runs/tys9nxxj).
| {} | sayakpaul/sdxl-orpo-large-beta_orpo-0.005-beta_inner-500-lr-5e-7-scaled-steps-5000 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-05-02T08:57:46+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# khadija69/roberta-finetuned-ner-bio_test
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1632
- Validation Loss: 0.2486
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3480, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
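The serialized optimizer above matches what `transformers.create_optimizer` produces; reconstructing the call is an assumption, but it would look roughly like this:

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (power=1.0) decay from 2e-5 to 0 over 3480 steps.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=3480,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```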
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3753 | 0.2708 | 0 |
| 0.1951 | 0.2432 | 1 |
| 0.1632 | 0.2486 | 2 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "xlm-roberta-base", "model-index": [{"name": "khadija69/roberta-finetuned-ner-bio_test", "results": []}]} | khadija69/roberta-finetuned-ner-bio_test | null | [
"transformers",
"tf",
"xlm-roberta",
"token-classification",
"generated_from_keras_callback",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:58:11+00:00 |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-Incongruenza3di7-5Epochs
This model is a fine-tuned version of [tedad09/PolizzeDonut-ChangeRequest-imm5epochs-Expand0](https://huggingface.co/tedad09/PolizzeDonut-ChangeRequest-imm5epochs-Expand0) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "tedad09/PolizzeDonut-ChangeRequest-imm5epochs-Expand0", "model-index": [{"name": "PolizzeDonut-Incongruenza3di7-5Epochs", "results": []}]} | tedad09/PolizzeDonut-Incongruenza3di7-5Epochs | null | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:tedad09/PolizzeDonut-ChangeRequest-imm5epochs-Expand0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:58:36+00:00 |
text-generation | transformers | {} | TwinDoc/H100_stage1_checkpoint-30240 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T08:59:02+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | aritrosakar/gpt2-reuters-tokenizer | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T08:59:15+00:00 |
null | null | {} | GraydientPlatformAPI/hands3-xl | null | [
"region:us"
] | null | 2024-05-02T08:59:18+00:00 |
|
null | null | {"license": "llama2"} | emrecanacikgoz/hamza-mistral | null | [
"license:llama2",
"region:us"
] | null | 2024-05-02T08:59:22+00:00 |
|
text-generation | transformers |
# jp-stablelm-3b-4belt-prune2
jp-stablelm-3b-4belt-prune2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [stabilityai/japanese-stablelm-3b-4e1t-instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct)
* [stabilityai/japanese-stablelm-3b-4e1t-instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: stabilityai/japanese-stablelm-3b-4e1t-instruct
layer_range: [0, 12]
- sources:
- model: stabilityai/japanese-stablelm-3b-4e1t-instruct
layer_range: [22, 32]
merge_method: passthrough
dtype: bfloat16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/jp-stablelm-3b-4belt-prune2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "stabilityai/japanese-stablelm-3b-4e1t-instruct"], "base_model": ["stabilityai/japanese-stablelm-3b-4e1t-instruct", "stabilityai/japanese-stablelm-3b-4e1t-instruct"]} | aipib/jp-stablelm-3b-4belt-prune2 | null | [
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"stabilityai/japanese-stablelm-3b-4e1t-instruct",
"custom_code",
"base_model:stabilityai/japanese-stablelm-3b-4e1t-instruct",
"autotrain_compatible",
"region:us"
] | null | 2024-05-02T08:59:43+00:00 |
null | transformers | {"license": "mit"} | letuan/mblip-mt0-xl-vivqa | null | [
"transformers",
"pytorch",
"blip-2",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:00:07+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
EEVE-Korean-10.8B-v1.0 - bnb 8bits
- Model creator: https://huggingface.co/yanolja/
- Original model: https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/
Original model description:
---
license: apache-2.0
base_model: upstage/SOLAR-10.7B-v1.0
tags:
- generated_from_trainer
model-index:
- name: yanolja/EEVE-Korean-10.8B-v1.0
results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/>
<p>
# EEVE-Korean-10.8B-v1.0
## Join Our Community on Discord!
If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLMs is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).
## Our Dedicated Team (Alphabetical Order)
| Research | Engineering | Product Management | UX Design |
|-----------------|-----------------|--------------------|--------------|
| Myeongho Jeong | Geon Kim | Bokyung Huh | Eunsue Choi |
| Seungduk Kim | Rifqi Alfi | | |
| Seungtaek Choi | Sanghoon Han | | |
| | Suhyun Kang | | |
## About the Model
This model is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0), specifically fine-tuned on various Korean web-crawled datasets available on HuggingFace. Our approach was to expand the model's understanding of Korean by pre-training the embeddings for new tokens and partially fine-tuning the `lm_head` embeddings for the already existing tokens while preserving the original parameters of the base model.
### Technical Deep Dive
<p align="left">
<img src="https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0/resolve/main/EEVE_figure.png" width="100%"/>
<p>
To adapt foundational models from English to Korean, we use subword-based embedding with a seven-stage training process involving parameter freezing.
This approach progressively trains from input embeddings to full parameters, efficiently extending the model's vocabulary to include Korean.
Our method enhances the model's cross-linguistic applicability by carefully integrating new linguistic tokens, focusing on causal language modeling pre-training.
We leverage the inherent capabilities of foundational models trained on English to efficiently transfer knowledge and reasoning to Korean, optimizing the adaptation process.
For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).
Here's a simplified code snippet of our key approach:
```python
# number_of_old_tokens is the size of tokenizer before vocab extension. For example, in case of EEVE-Korean-10.8B-v1.0, number_of_old_tokens is 32000.
def freeze_partial_embedding_hook(grad):
grad[:number_of_old_tokens] = 0
return grad
for name, param in model.named_parameters():
if ("lm_head" in name or "embed_tokens" in name) and "original" not in name:
param.requires_grad = True
if "embed_tokens" in name:
param.register_hook(freeze_partial_embedding_hook)
else:
param.requires_grad = False
```
### Usage and Limitations
Keep in mind that this model hasn't been fine-tuned with instruction-based training. While it excels in Korean language tasks, we advise careful consideration and further training for specific applications.
### Training Details
Our model's training was comprehensive and diverse:
- **Vocabulary Expansion:**
We meticulously selected 8,960 Korean tokens based on their frequency in our Korean web corpus. This process involved multiple rounds of tokenizer training, manual curation, and token frequency analysis, ensuring a rich and relevant vocabulary for our model.
1. **Initial Tokenizer Training:** We trained an intermediate tokenizer on a Korean web corpus, with a vocabulary of 40,000 tokens.
2. **Extraction of New Korean Tokens:** From the intermediate tokenizer, we identified all Korean tokens not present in the original SOLAR's tokenizer.
3. **Manual Tokenizer Construction:** We then built the target tokenizer, focusing on these new Korean tokens.
4. **Frequency Analysis:** Using the target tokenizer, we processed a 100GB Korean corpus to count each token's frequency.
5. **Refinement of Token List:** We removed tokens appearing less than 6,000 times, ensuring to secure enough tokens to train models later.
6. **Inclusion of Single-Letter Characters:** We counted the missing Korean single-letter characters and added those that appeared more than 6,000 times to the target tokenizer.
7. **Iterative Refinement:** We repeated steps 2 to 6 until there were no tokens to drop or add.
8. **Training Bias Towards New Tokens:** Our training data was biased to include more texts with new tokens, for effective learning.
This rigorous approach ensured a comprehensive and contextually rich Korean vocabulary for the model.
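As an illustration of the frequency-analysis step (steps 4–5 above), here is a minimal sketch; the corpus path, streaming strategy, and exact counting pipeline are assumptions rather than details from the report:

```python
from collections import Counter
from transformers import AutoTokenizer

# Count token frequencies over a Korean corpus with the target tokenizer,
# then keep only tokens seen at least 6,000 times (the threshold from step 5).
tokenizer = AutoTokenizer.from_pretrained("yanolja/EEVE-Korean-10.8B-v1.0")
counts = Counter()
with open("korean_corpus.txt", encoding="utf-8") as f:  # placeholder corpus file
    for line in f:
        counts.update(tokenizer.tokenize(line))

kept_tokens = [tok for tok, freq in counts.items() if freq >= 6000]
print(len(kept_tokens))
```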
## Citation
```
@misc{kim2024efficient,
title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models},
author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong},
year={2024},
eprint={2402.14714},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {} | RichardErkhov/yanolja_-_EEVE-Korean-10.8B-v1.0-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2402.14714",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-02T09:00:51+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
xlm-roberta-large-squad2 - bnb 8bits
- Model creator: https://huggingface.co/deepset/
- Original model: https://huggingface.co/deepset/xlm-roberta-large-squad2/
Original model description:
---
language: multilingual
license: cc-by-4.0
tags:
- question-answering
datasets:
- squad_v2
model-index:
- name: deepset/xlm-roberta-large-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 81.8281
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzVhZDE2NTg5NmUwOWRkMmI2MGUxYjFlZjIzNmMyNDQ2MDY2MDNhYzE0ZjY5YTkyY2U4ODc3ODFiZjQxZWQ2YSIsInZlcnNpb24iOjF9.f_rN3WPMAdv-OBPz0T7N7lOxYz9f1nEr_P-vwKhi3jNdRKp_JTy18MYR9eyJM2riKHC6_ge-8XwfyrUf51DSDA
- type: f1
value: 84.8886
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGE5MWJmZGUxMGMwNWFhYzVhZjQwZGEwOWQ4N2Q2Yjg5NzdjNDFiNDhiYTQ1Y2E5ZWJkOTFhYmI1Y2Q2ZGYwOCIsInZlcnNpb24iOjF9.TIdH-tOx3kEMDs5wK1r6iwZqqSjNGlBrpawrsE917j1F3UFJVnQ7wJwaj0OIgmC4iw8OQeLZL56ucBcLApa-AQ
---
# Multilingual XLM-RoBERTa large for QA on various languages
## Overview
**Language model:** xlm-roberta-large
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD dev set - German MLQA - German XQuAD
**Training run:** [MLFlow link](https://public-mlflow.deepset.ai/#/experiments/124/runs/3a540e3f3ecf4dd98eae8fc6d457ff20)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "xlm-roberta-large"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD 2.0 English dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.45759285774446,
"f1": 83.79259828925511,
"total": 11873,
"HasAns_exact": 71.96356275303644,
"HasAns_f1": 80.6460053117963,
"HasAns_total": 5928,
"NoAns_exact": 86.93019343986543,
"NoAns_f1": 86.93019343986543,
"NoAns_total": 5945
```
Evaluated on German [MLQA: test-context-de-question-de.json](https://github.com/facebookresearch/MLQA)
```
"exact": 49.34691166703564,
"f1": 66.15582561674236,
"total": 4517,
```
Evaluated on German [XQuAD: xquad.de.json](https://github.com/deepmind/xquad)
```
"exact": 61.51260504201681,
"f1": 78.80206098332569,
"total": 1190,
```
## Usage
### In Haystack
For doing QA at scale (i.e. many docs instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2")
# or
reader = TransformersReader(model="deepset/xlm-roberta-large-squad2",tokenizer="deepset/xlm-roberta-large-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
**Branden Chan:** [email protected]
**Timo Möller:** [email protected]
**Malte Pietsch:** [email protected]
**Tanay Soni:** [email protected]
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, and more.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
| {} | RichardErkhov/deepset_-_xlm-roberta-large-squad2-8bits | null | [
"transformers",
"safetensors",
"xlm-roberta",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] | null | 2024-05-02T09:01:13+00:00 |
text-generation | transformers | {"license": "llama2"} | emrecanacikgoz/hamza-gpt2-xl | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:01:31+00:00 |
|
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chirbard/ppo-Worm
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
## Hyperparameters
```
behaviors:
Worm:
trainer_type: ppo
hyperparameters:
batch_size: 2024
buffer_size: 20240
learning_rate: 0.0003
beta: 0.005
epsilon: 0.2
lambd: 0.95
num_epoch: 3
learning_rate_schedule: linear
network_settings:
normalize: true
hidden_units: 512
num_layers: 3
vis_encode_type: simple
reward_signals:
extrinsic:
gamma: 0.9995
strength: 1.0
keep_checkpoints: 5
max_steps: 5000000
time_horizon: 1000
summary_freq: 30000
```
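To train an agent from scratch with this configuration, save the YAML above to a file (the filename and run-id below are assumptions) and launch ML-Agents; the resulting run folder can then be pushed to the Hub:
```bash
# Assumed file names and run-id; adjust to your own setup.
mlagents-learn ./config/ppo/Worm.yaml --run-id=Worm1 --no-graphics

# After training, upload the results folder to the Hugging Face Hub.
mlagents-push-to-hf --run-id=Worm1 --local-dir=./results/Worm1 \
  --repo-id=<your-username>/ppo-Worm --commit-message="Trained Worm agent"
```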
| {"library_name": "ml-agents", "tags": ["Worm", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Worm"]} | chirbard/ppo-Worm | null | [
"ml-agents",
"tensorboard",
"onnx",
"Worm",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
] | null | 2024-05-02T09:01:32+00:00 |
null | null | {"license": "apache-2.0"} | NovaTech86/Mia | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T09:02:22+00:00 |
|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MoGP/g_x_reg | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:02:50+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | azhara001/donut-base-demo-new-3e-05_AdamW_1876 | null | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:03:56+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | tropianhs/mistral-tweet-finetune-tropianhs-full | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T09:03:59+00:00 |
text-classification | transformers | {"license": "mit"} | wisnu001binus/hate_speech_detection_ALBERTbase | null | [
"transformers",
"safetensors",
"albert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:04:10+00:00 |
|
text-generation | transformers |
# Uploaded model
- **Developed by:** waylandzhang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
Teaching-purpose model. This model was fine-tuned only to accompany my video tutorials :D
**QLoRA (4bit)**
Params to replicate training
Peft Config
```
r=8,
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
],
lora_alpha=16,
lora_dropout=0,
bias="none",
random_state=3407,
use_rslora=False, # Rank stabilized LoRA
loftq_config=None, # LoftQ
```
Training args
```
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
gradient_accumulation_steps=4, # set to 4 to avoid issues with GPTQ Quantization
warmup_steps=5,
max_steps=300, # Fine-tune iterations
learning_rate=2e-4,
fp16=not torch.cuda.is_bf16_supported(),
bf16=torch.cuda.is_bf16_supported(),
evaluation_strategy="steps",
prediction_loss_only=True,
eval_accumulation_steps=1,
eval_steps=10,
logging_steps=1,
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="cosine", # instead of "linear"
seed=1337,
output_dir="wayland-files/models",
report_to="wandb", # Log report to W&B
```
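The two blocks above are fragments; in a full script they would typically be wired into Unsloth's `get_peft_model` and TRL's `SFTTrainer` roughly as sketched below. The base model name, dataset variable, and text field are assumptions for illustration, not exact copies of the original training script.
```python
# Rough sketch of how the PEFT config and training args above fit together.
# `train_dataset` is a placeholder for your own Alpaca-formatted dataset.
import torch
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 4096
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # base model named in this card
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    random_state=3407,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,      # placeholder dataset
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=300,
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="cosine",
        seed=1337,
        output_dir="wayland-files/models",
    ),
)
trainer.train()
```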
**Inference Code**
```python
from unsloth import FastLanguageModel
import os
import torch
max_seq_length = 4096 # 2048
dtype = None
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1",
max_seq_length=max_seq_length,
dtype=dtype,
load_in_4bit=load_in_4bit,
device_map="cuda",
attn_implementation="flash_attention_2"
)
FastLanguageModel.for_inference(model) # Unsloth's inference mode speeds up generation roughly 2x
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
inputs = tokenizer(
[
alpaca_prompt.format(
"给你一段话,帮我继续写下去。", # task instruction: "Here is a passage, please continue writing it."
"小明在西安城墙上", # user input: "Xiao Ming is on the Xi'an city wall"
"", # output - leave empty to auto-generate / fill in to provide a continuation
)
], return_tensors="pt").to("cuda")
# Opt 1: plain text generation output
# outputs = model.generate(**inputs, max_new_tokens=500, use_cache=True)
# print(tokenizer.batch_decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True))
# Opt 2: streaming token output
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=500)
```
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["en", "zh"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "Chinese", "unsloth", "llama", "trl"], "base_model": "waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1"} | waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"Chinese",
"unsloth",
"trl",
"en",
"zh",
"base_model:waylandzhang/Llama-3-8b-Chinese-Novel-4bit-lesson-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-02T09:04:18+00:00 |
null | diffusers |
More information on all the CLI arguments and the environment is available on the [`wandb` run page](https://wandb.ai/sayakpaul/diffusion-orpo-lora-sdxl/runs/23l9mn47).
| {} | sayakpaul/sdxl-orpo-large-beta_orpo-0.005-beta_inner-500-lr-5e-7-scaled-steps-5000-no-warmup | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-05-02T09:04:38+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | youngisk/whisper-large-v2-korea-common_13 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:04:50+00:00 |
sentence-similarity | sentence-transformers |
# COCO0414/DNF-bge-m3
A [bge-m3](https://huggingface.co/BAAI/bge-m3)-based embedding model for Dungeon & Fighter data.
Training data: the [Dungeon&Fighter](https://df.nexon.com) user community Q&A board, from which 79k QA pairs were extracted across 20k articles.
This model was created for use in RAG (retrieval-augmented generation).
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
import torch
question = "사도들에 대해서 알려줘"  # "Tell me about the Apostles"
# Data from Namuwiki: a long passage describing the Apostles (사도), truncated here.
# The similarity printed below was computed against the full passage.
answer = "..."
model = SentenceTransformer('COCO0414/DNF-bge-m3')
question_embedding = model.encode(question, convert_to_tensor=True)
answer_embedding = model.encode(answer, convert_to_tensor=True)
similarity = torch.nn.functional.cosine_similarity(question_embedding, answer_embedding, dim=0)
print(similarity.item())
# 0.6239629983901978
```
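Since the model is intended for RAG, the more common pattern is to embed a set of candidate passages once and retrieve the best matches for each query. Below is a minimal retrieval sketch; the passages are made-up placeholders, not taken from the training data.
```python
# Minimal retrieval sketch for RAG-style usage; passages are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("COCO0414/DNF-bge-m3")

passages = [
    "사도는 천계를 위협하는 초월적인 존재들이다.",          # placeholder passage about the Apostles
    "던전앤파이터의 직업은 귀검사, 격투가 등으로 나뉜다.",  # placeholder passage about character classes
]
query = "사도들에 대해서 알려줘"

passage_embeddings = model.encode(passages, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank passages by cosine similarity and keep the best hit as RAG context.
hits = util.semantic_search(query_embedding, passage_embeddings, top_k=1)[0]
best = hits[0]
print(passages[best["corpus_id"]], best["score"])
```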
| {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"} | COCO0414/DNF-bge-m3 | null | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:05:12+00:00 |
text-generation | transformers |
# ๐ฎ THIS IS UNHOLY ๐ฆ

This model is a proud fork of [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B), infused with [WEMAKE-CX/V41](https://huggingface.co/WEMAKE-CX/V41). Blending this technology with emotional intelligence opens new ways of understanding and interacting.
## Why ๐ WeMake VX-Unholy
We want not only to advance technology but to do so with consciousness, responsibility, and empathy. VX-Unholy represents one of the many steps we take toward this goal.
### Our Goals With VX-Unholy
- **Foster Emotional Intelligence:** Incorporating our V41 emotional intelligence unit, VX-Unholy aims to bring a new depth of understanding and empathy to AI interactions.
- **Promote Ethical AI Use:** We stand firmly for the ethical development and application of AI. This model is shared with the world under the [cc-by-nc-4.0](https://spdx.org/licenses/CC-BY-NC-4.0) license that encourages non-commercial, responsible use.
- **Encourage Innovation and Learning:** We believe in the power of community and shared knowledge. VX-Unholy is not just a tool but an open invitation for curious minds to explore, learn, and innovate together.
### How We Use VX-Unholy
VX-Unholy serves as one of many efforts to develop AI solutions that are not only technologically advanced but also socially conscious and emotionally intelligent. It's used in:
- Research and Development: Enhancing our understanding of AI's potential to comprehend and mimic emotional intelligence.
- Community Projects: Engaging with developers, ethicists, and the broader community to explore innovative uses and promote responsible AI development.
- Educational Programs: Offering resources for learning and exploration to foster the next generation of responsible AI engineers and enthusiasts.
## Our Commitment to Ethics and Sustainability
It's 2024; of course our commitment to sustainability, ethical responsibility, and equal opportunities shapes every project we undertake. VX-Unholy reflects these values and our belief in the power of technology to change the world for the better when guided by heart and responsibility.
Thank you for your interest in VX-Unholy. Together, let's make a positive impact on the world, one intelligent interaction at a time.
๐ **The WeMake Team**
[๐ค Ethics Policy](https://wemake.cx/legal/ethics/) [๐ก๏ธ Privacy Policy](https://wemake.cx/legal/privacy) [๐ Imprint](https://wemake.cx/legal/imprint) | {"language": ["en", "de"], "license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["WEMAKE-CX/Intelligent-Content-Understanding"]} | WEMAKE-CX/VX-Unholy-13B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"nsfw",
"en",
"de",
"dataset:WEMAKE-CX/Intelligent-Content-Understanding",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:05:29+00:00 |
summarization | transformers | {"pipeline_tag": "summarization"} | SyedShaheer/bart-large-cnn-samsum_tuned_V2_1 | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:07:20+00:00 |
|
null | null | {} | andersonbcdefg/tiny-emb-2024-05-02_09-08-42 | null | [
"region:us"
] | null | 2024-05-02T09:08:42+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** CarolLiu999
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"} | CarolLiu999/llama-3-8b-Instruct-bnb-4bit-lora-TWhealthCare | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:08:42+00:00 |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('izhl/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
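To sample several images at once, the pipeline call accepts a `batch_size` argument (a small usage sketch, not part of the original card):
```python
# Generate a small batch of butterflies and save them to disk.
images = pipeline(batch_size=4).images
for i, im in enumerate(images):
    im.save(f"butterfly_{i}.png")
```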
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | izhl/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T09:09:12+00:00 |
text-classification | transformers | {} | wisnu001binus/hate_speech_detection_RoBERTabase | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T09:09:45+00:00 |
|
text-generation | transformers | {} | TwinDoc/H100_stage1_checkpoint-36000 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T09:10:20+00:00 |