modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (string, 245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---|
sakasaku/dqn-SpaceInvadersNoFrameskip-v4 | sakasaku | 2024-07-02T07:40:37Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:40:37Z | Entry not found |
SamagraDataGov/e2e_testtt | SamagraDataGov | 2024-07-02T07:41:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T07:41:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidyu2023/Qwen-Qwen1.5-0.5B-1719906099 | davidyu2023 | 2024-07-02T07:41:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-07-02T07:41:40Z | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
Echelon-AI/Med-Qwen2-7B | Echelon-AI | 2024-07-02T14:12:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:Malikeh1375/medical-question-answering-datasets",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T07:41:49Z | ---
license: apache-2.0
datasets:
- Malikeh1375/medical-question-answering-datasets
---
Qwen2 7B, after fine-tuning on a medical dataset, demonstrates enhanced performance in medical text understanding and generation.
### Model Description
The model shows improved accuracy in diagnosing medical conditions, generating specialized medical texts, and responding to medical queries with contextually relevant information. This adaptation equips Med-Qwen2 to support advanced applications in healthcare, offering nuanced insights and precise language processing tailored for medical professionals and patients alike.
- **Finetuned from model:** [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)
[GGUF](https://huggingface.co/Echelon-AI/Med-Qwen2-GGUF)
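Qwen2-Instruct checkpoints converse in the ChatML format. As a minimal sketch of how a prompt for this model is laid out (in practice, prefer `tokenizer.apply_chat_template` from transformers; the system message below is illustrative, not part of this card):

```python
# Sketch of the ChatML prompt layout used by Qwen2-Instruct models.
# The special tokens are normally inserted by the tokenizer's chat
# template; this only shows the resulting structure.
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt, left open for the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a careful medical assistant.",
    "What are common symptoms of iron-deficiency anemia?",
)
print(prompt)
```

The trailing open `assistant` turn is where the model's generation begins.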
## Uses
- Diagnosing medical conditions with improved accuracy.
- Generating specialized medical texts and reports.
- Providing contextually relevant responses to medical queries.
- Supporting advanced applications in healthcare with precise language processing. |
gnkbhuvan/phi-2-health | gnkbhuvan | 2024-07-02T08:09:00Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"phi-msft",
"medical",
"text-generation",
"custom_code",
"en",
"dataset:wangrongsheng/HealthCareMagic-100k-en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-02T07:41:58Z | ---
license: apache-2.0
datasets:
- wangrongsheng/HealthCareMagic-100k-en
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- medical
--- |
kapilrk04/indicbart_multiway_mt_model | kapilrk04 | 2024-07-02T22:31:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-02T07:43:19Z | Entry not found |
kazars24/wav2vec2-base-rus-golos-100h-farfield | kazars24 | 2024-07-02T15:22:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T07:44:19Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base-960h
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-rus-golos-100h-farfield
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-rus-golos-100h-farfield
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
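The `linear` scheduler with 1000 warmup steps ramps the learning rate up from 0 to its peak, then decays it linearly back toward 0 over the remaining steps. A minimal sketch of that shape (mirroring the behavior of transformers' `get_linear_schedule_with_warmup`; the total step count here is an assumed example, not taken from this run):

```python
def linear_schedule_with_warmup(step: int, warmup_steps: int, total_steps: int) -> float:
    """LR multiplier at `step`: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

peak_lr = 1e-7  # learning_rate from the card
for s in (500, 1000, 10_000):  # assumed 10k total steps for illustration
    print(s, peak_lr * linear_schedule_with_warmup(s, 1000, 10_000))
```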
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
mreza629/coca | mreza629 | 2024-07-02T07:47:50Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T07:45:13Z | ---
license: apache-2.0
---
|
Uphando-ng/naija_eng_female_accent_tts | Uphando-ng | 2024-07-02T07:53:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-speech",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2024-07-02T07:45:15Z | ---
license: apache-2.0
pipeline_tag: text-to-speech
--- |
Uphando-ng/naija_eng_male_accent_tts | Uphando-ng | 2024-07-02T07:53:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"text-to-speech",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2024-07-02T07:45:36Z | ---
license: apache-2.0
pipeline_tag: text-to-speech
--- |
habulaj/1832321302 | habulaj | 2024-07-02T07:45:43Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:45:38Z | Entry not found |
CatBarks/flant5small-lora-oasst1_model | CatBarks | 2024-07-02T07:55:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-07-02T07:45:50Z | ---
library_name: transformers
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CatBarks/flant5small-lora-oasst1_tokenizer | CatBarks | 2024-07-02T07:45:56Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:45:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dabitron/Qwen2-7B-Instruct-abliterated-Q4_K_M-GGUF | Dabitron | 2024-07-02T07:46:18Z | 0 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:natong19/Qwen2-7B-Instruct-abliterated",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-02T07:45:58Z | ---
base_model: natong19/Qwen2-7B-Instruct-abliterated
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Dabitron/Qwen2-7B-Instruct-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`natong19/Qwen2-7B-Instruct-abliterated`](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q4_K_M-GGUF --hf-file qwen2-7b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q4_K_M-GGUF --hf-file qwen2-7b-instruct-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q4_K_M-GGUF --hf-file qwen2-7b-instruct-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q4_K_M-GGUF --hf-file qwen2-7b-instruct-abliterated-q4_k_m.gguf -c 2048
```
|
ThomasSimonini/dqn-SpaceInvadersNoFrameskip-v4BB | ThomasSimonini | 2024-07-02T07:46:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-02T07:46:16Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ThomasSimonini -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ThomasSimonini -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ThomasSimonini
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
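In SB3's DQN, `exploration_fraction` and `exploration_final_eps` imply an ε-greedy schedule that decays linearly from 1.0 to 0.01 over the first 10% of training, then stays constant. A sketch of that schedule under these hyperparameters (the initial ε of 1.0 is SB3's default, assumed here):

```python
def epsilon_at(step: int, n_timesteps: int = 10_000,
               exploration_fraction: float = 0.1,
               final_eps: float = 0.01, initial_eps: float = 1.0) -> float:
    """Linear epsilon decay over the first fraction of training, then constant."""
    decay_steps = exploration_fraction * n_timesteps
    progress = min(1.0, step / decay_steps)
    return initial_eps + progress * (final_eps - initial_eps)

# Epsilon at the start, mid-decay, and after the decay window.
print(epsilon_at(0), epsilon_at(500), epsilon_at(5_000))
```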
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nbadrinath/ikea_room_designs_sd1.5_lora_full_finetuning_020720240714 | nbadrinath | 2024-07-02T10:09:43Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-07-02T07:47:21Z | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - nbadrinath/ikea_room_designs_sd1.5_lora_full_finetuning_020720240714
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the nbadrinath/ikea_dataset_5.0 dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# Untested sketch -- the author's exact generation settings are unknown.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nbadrinath/ikea_room_designs_sd1.5_lora_full_finetuning_020720240714")
image = pipe("an IKEA-style modern living room").images[0]
image.save("room.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Echelon-AI/Med-Qwen2-GGUF | Echelon-AI | 2024-07-02T08:08:41Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T07:48:32Z | ---
license: apache-2.0
---
Qwen2 7B, after finetuning on a medical dataset, demonstrates enhanced performance in medical text understanding and generation.
### Model Description
The model shows improved accuracy in diagnosing medical conditions, generating specialized medical texts, and responding to medical queries with contextually relevant information. This adaptation equips Med-Qwen2 to support advanced applications in healthcare, offering nuanced insights and precise language processing tailored for medical professionals and patients alike.
- **Finetuned from model :** [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)
## Uses
- Diagnosing medical conditions with improved accuracy.
- Generating specialized medical texts and reports.
- Providing contextually relevant responses to medical queries.
- Supporting advanced applications in healthcare with precise language processing.
## Sample Outputs:

|
Poxios/STOCK_SOLAR-10.7B-Q4_K_M-GGUF | Poxios | 2024-07-02T07:49:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:kihoonlee/STOCK_SOLAR-10.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:48:40Z | ---
base_model: kihoonlee/STOCK_SOLAR-10.7B
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Poxios/STOCK_SOLAR-10.7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`kihoonlee/STOCK_SOLAR-10.7B`](https://huggingface.co/kihoonlee/STOCK_SOLAR-10.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kihoonlee/STOCK_SOLAR-10.7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Poxios/STOCK_SOLAR-10.7B-Q4_K_M-GGUF --hf-file stock_solar-10.7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Poxios/STOCK_SOLAR-10.7B-Q4_K_M-GGUF --hf-file stock_solar-10.7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Poxios/STOCK_SOLAR-10.7B-Q4_K_M-GGUF --hf-file stock_solar-10.7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Poxios/STOCK_SOLAR-10.7B-Q4_K_M-GGUF --hf-file stock_solar-10.7b-q4_k_m.gguf -c 2048
```
|
TakuyaGemma/maki | TakuyaGemma | 2024-07-02T07:49:31Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:48:59Z | Entry not found |
z3n7r4ck3r/filtered_dataset_20240702_095025 | z3n7r4ck3r | 2024-07-02T07:50:24Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:50:24Z | Entry not found |
YoSHiK/whisper-small-ja | YoSHiK | 2024-07-02T09:47:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T07:50:35Z | Entry not found |
flammenai/Mahou-1.3-spark-7B | flammenai | 2024-07-02T12:00:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:flammenai/FlameMix-DPO-v1",
"dataset:flammenai/MahouMix-v1",
"dataset:flammenai/Grill-Flammen-v1_chatML",
"base_model:arcee-ai/Arcee-Spark",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T07:50:47Z | ---
library_name: transformers
license: apache-2.0
base_model:
- arcee-ai/Arcee-Spark
datasets:
- flammenai/FlameMix-DPO-v1
- flammenai/MahouMix-v1
- flammenai/Grill-Flammen-v1_chatML
---

# Mahou-1.3-spark-7B
Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay.
### Chat Format
This model has been trained to use ChatML format. Note the additional tokens in [tokenizer_config.json](tokenizer_config.json).
```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
### Roleplay Format
- Speech without quotes.
- Actions in `*asterisks*`
```
*leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass.
```
### SillyTavern Settings
1. Use ChatML for the Context Template.
2. Enable Instruct Mode.
3. Use the [Mahou preset](https://huggingface.co/datasets/flammenai/Mahou-ST-ChatML-Instruct/raw/main/Mahou.json).
4. *Recommended:* Additional stopping strings: `["\n", "<|", "</"]`
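Stopping strings work by cutting the completion at the earliest occurrence of any listed string. A small sketch of that behavior (illustrative only, not SillyTavern's actual implementation):

```python
def truncate_at_stop(text, stop_strings=("\n", "<|", "</")):
    """Cut generated text at the earliest stopping string, if any."""
    cut = len(text)
    for s in stop_strings:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```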
### Method
Finetuned for 3 epochs using an A100 on Google Colab.
[Fine-tune Llama 3 with ORPO](https://huggingface.co/blog/mlabonne/orpo-llama-3) - [Maxime Labonne](https://huggingface.co/mlabonne) |
Dabitron/Qwen2-7B-Instruct-abliterated-Q6_K-GGUF | Dabitron | 2024-07-02T07:51:27Z | 0 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:natong19/Qwen2-7B-Instruct-abliterated",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-02T07:51:01Z | ---
base_model: natong19/Qwen2-7B-Instruct-abliterated
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Dabitron/Qwen2-7B-Instruct-abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`natong19/Qwen2-7B-Instruct-abliterated`](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2-7b-instruct-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2-7b-instruct-abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2-7b-instruct-abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q6_K-GGUF --hf-file qwen2-7b-instruct-abliterated-q6_k.gguf -c 2048
```
|
vgarg/fw_identification_model_e5_large_v7_02_07_2024 | vgarg | 2024-07-02T07:53:22Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-07-02T07:51:17Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# vgarg/fw_identification_model_e5_large_v7_02_07_2024
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("vgarg/fw_identification_model_e5_large_v7_02_07_2024")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
NeelShrimali/dir | NeelShrimali | 2024-07-02T07:57:54Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:51:50Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** NeelShrimali
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cyan2k/promptvieh_lora | cyan2k | 2024-07-02T07:54:20Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:54:19Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** cyan2k
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
z3n7r4ck3r/filtered_dataset_20240702_095438 | z3n7r4ck3r | 2024-07-02T07:54:37Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:54:37Z | Entry not found |
habulaj/8883165414 | habulaj | 2024-07-02T07:56:02Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:56:00Z | Entry not found |
Roooy/whisper-tiny-ko3 | Roooy | 2024-07-02T08:22:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ko",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T07:56:06Z | ---
base_model: openai/whisper-tiny
datasets:
- mozilla-foundation/common_voice_11_0
language:
- ko
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: Whisper Tiny Ko - Roooy
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_11_0
config: ko
split: None
args: 'config: ko, split: test'
metrics:
- type: wer
value: 55.80595874713522
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Ko - Roooy
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8060
- Cer: 25.1580
- Wer: 55.8060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|
| 0.5449 | 2.2222 | 100 | 0.7823 | 25.3686 | 55.8442 |
| 0.2663 | 4.4444 | 200 | 0.7768 | 25.5893 | 56.3407 |
| 0.1466 | 6.6667 | 300 | 0.7857 | 25.1580 | 55.6532 |
| 0.0953 | 8.8889 | 400 | 0.8016 | 25.3686 | 55.6150 |
| 0.076 | 11.1111 | 500 | 0.8060 | 25.1580 | 55.8060 |
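The WER column above is the standard word error rate: word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal illustrative implementation (the card's numbers come from the training script's own metric tooling, not from this sketch):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```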
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
noake/segformer-b0-finetuned-segments-sidewalk-2 | noake | 2024-07-02T07:56:30Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:56:30Z | Entry not found |
gassirbek/wav2vec2-large-mms-1b-kaz-colab | gassirbek | 2024-07-02T07:57:18Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:57:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tinnguyen/falcon-openassistant-toxicity-increase-30-epochs | tinnguyen | 2024-07-02T08:01:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T07:57:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Dabitron/Qwen2-7B-Instruct-abliterated-Q5_K_M-GGUF | Dabitron | 2024-07-02T07:58:14Z | 0 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:natong19/Qwen2-7B-Instruct-abliterated",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-02T07:57:51Z | ---
base_model: natong19/Qwen2-7B-Instruct-abliterated
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Dabitron/Qwen2-7B-Instruct-abliterated-Q5_K_M-GGUF
This model was converted to GGUF format from [`natong19/Qwen2-7B-Instruct-abliterated`](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q5_K_M-GGUF --hf-file qwen2-7b-instruct-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q5_K_M-GGUF --hf-file qwen2-7b-instruct-abliterated-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q5_K_M-GGUF --hf-file qwen2-7b-instruct-abliterated-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q5_K_M-GGUF --hf-file qwen2-7b-instruct-abliterated-q5_k_m.gguf -c 2048
```
|
KasuleTrevor/wav2vec2-large-xls-r-300m-lg-cv-50hr-v1 | KasuleTrevor | 2024-07-02T11:23:53Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:58:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yash0109/diaratechHf_llama49307095-fde8-4f40-b5ce-572eb6bcc729 | Yash0109 | 2024-07-02T08:01:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"text-generation",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-02T07:59:36Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: diaratechHf_llama49307095-fde8-4f40-b5ce-572eb6bcc729
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# diaratechHf_llama49307095-fde8-4f40-b5ce-572eb6bcc729
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.3.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
NgTMDuc/model_weight | NgTMDuc | 2024-07-02T07:59:52Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:59:52Z | Entry not found |
adhityaprimandhika/fine-tuned-mdeberta-category-by-notes | adhityaprimandhika | 2024-07-02T08:03:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T08:00:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lithiumice/smplx_blender_addon_data | lithiumice | 2024-07-02T08:07:37Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:00:54Z | Entry not found |
gritli/distilbert-left | gritli | 2024-07-02T08:01:54Z | 0 | 0 | transformers | [
"transformers",
"zero-shot-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2024-07-02T08:01:02Z | ---
library_name: transformers
pipeline_tag: zero-shot-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jameswilly991/worldtransdocuments | jameswilly991 | 2024-07-02T08:01:08Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-07-02T08:01:08Z | ---
license: other
license_name: worldtransdocuments
license_link: LICENSE
---
|
Mayonnaisu/donut-kompetansebevis-v2 | Mayonnaisu | 2024-07-02T08:01:35Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:01:35Z | Entry not found |
QuantFactory/llama3-8B-DarkIdol-2.1-Uncensored-1048K-GGUF | QuantFactory | 2024-07-02T08:54:37Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T08:01:39Z | Entry not found |
hflqf88888/OdysseyAgent-task | hflqf88888 | 2024-07-02T09:20:15Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen",
"text-generation",
"GUI",
"custom_code",
"en",
"zh",
"dataset:OpenGVLab/GUI-Odyssey",
"license:cc-by-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-07-02T08:02:43Z | ---
license: cc-by-4.0
datasets:
- OpenGVLab/GUI-Odyssey
language:
- en
- zh
tags:
- GUI
---
## OdysseyAgent-task
The OdysseyAgent fine-tuned on the Train-Task split.
AesopX/AA | AesopX | 2024-07-02T08:03:03Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:03:03Z | Entry not found |
Moriacrafter/Qwen1.5-7B-4bit_DepressionDetection | Moriacrafter | 2024-07-02T08:07:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T08:03:25Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Raelina/Raemu-XL-V4 | Raelina | 2024-07-02T12:40:42Z | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"en",
"base_model:Raelina/Rae-Diffusion-XL-V2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-07-02T08:04:11Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
base_model: Raelina/Rae-Diffusion-XL-V2
---
<style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 100vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 2.5em;
text-align: center;
color: #333;
font-family: 'Helvetica Neue', sans-serif;
text-transform: uppercase;
letter-spacing: 0.1em;
padding: 0.5em 0;
background: transparent;
}
.title span {
background: -webkit-linear-gradient(45deg, #ff7a52, #a5cff0);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.custom-table {
table-layout: fixed;
width: 100%;
border-collapse: collapse;
margin-top: 2em;
}
.custom-table td {
width: 50%;
vertical-align: top;
padding: 10px;
box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}
.custom-image-container {
position: relative;
width: 100%;
margin-bottom: 0em;
overflow: hidden;
border-radius: 10px;
transition: transform .7s;
}
.custom-image-container:hover {
transform: scale(1.05);
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
border-radius: 10px;
transition: transform .7s;
margin-bottom: 0em;
}
.nsfw-filter {
filter: blur(8px);
transition: filter 0.3s ease;
}
.custom-image-container:hover .nsfw-filter {
filter: none;
}
.overlay {
position: absolute;
bottom: 0;
left: 0;
right: 0;
color: white;
width: 100%;
height: 40%;
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
font-size: 1vw;
font-style: bold;
text-align: center;
opacity: 0;
background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
transition: opacity .5s;
}
.custom-image-container:hover .overlay {
opacity: 1;
}
.overlay-text {
background: linear-gradient(45deg, #F1F8E8, #F1F8E8);
-webkit-background-clip: text;
color: transparent;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
}
.overlay-subtext {
font-size: 0.75em;
margin-top: 0.5em;
font-style: italic;
}
.overlay,
.overlay-subtext {
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
}
</style>
<h1 class="title">
<span>Raemu XL V4</span>
</h1>
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/KmLBQhfvwNfPire0wsJ2r.png" alt="Sample Image 1">
<div class="overlay">
<div class="overlay-text">Mizuno Ai</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/xzyMh8h90rZDVxpfZq0UD.png" alt="Sample Image 2">
<div class="overlay">
<div class="overlay-text">Cecillia Alcot</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/wcga0-fZPfToZOc8ygpkN.png" alt="Sample Image 3">
<div class="overlay">
<div class="overlay-text">Miia</div>
</div>
</div>
</td>
</tr>
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/L2YsLdZsPZMlWrkEUPyvv.png" alt="Sample Image 4">
<div class="overlay">
<div class="overlay-text">Akaza Akari</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/dhs2BM3HDZ9VOAzyb-6IJ.jpeg" alt="Sample Image 5">
<div class="overlay">
<div class="overlay-text">Lili</div>
</div>
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64b24543eec33e27dc9a6eca/pfH6AwI389cTm1KLyhnt2.png" alt="Sample Image 6">
<div class="overlay">
<div class="overlay-text">Lucy Heartfilia</div>
</div>
</div>
</td>
</tr>
</table>
## Overview
**Raemu XL V4** is a merged model focused on 2.5D anime.
## Model Details
- **Developed by**: [Raelina](https://civitai.com/user/Raelina)
- **Model type**: Diffusion-based text-to-image generative model
- **Model Description**: Generate high-quality anime images from textual prompts
- **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)
- **Finetuned from**: [Rae Diffusion XL V2](https://huggingface.co/Raelina/Rae-Diffusion-XL-V2)
### Usage Guidelines
## Tag Ordering
For optimal results, it's recommended to follow the structured prompt template the model was trained with:
```
1girl/1boy, character name, from which series, everything else in any order.
```
## Special Tag
Raemu XL inherits special tags from Rae Diffusion XL V2 that enhance image generation by steering results toward quality and aesthetics. While the model can generate images without these tags, using them helps achieve better results.
- **Quality tags:** masterpiece, best quality, great quality, good quality, normal quality, low quality, worst quality
- **Aesthetic tags:** very aesthetic, aesthetic, displeasing, very displeasing
## Recommended settings
- **Positive prompts:**
```
masterpiece, best quality, very aesthetic, absurdres,
```
- **Negative prompts:**
```
(low quality, worst quality:1.2), very displeasing, ugly, poorly drawn, signature, watermark,
```
- **CFG:** 7
- **Sampling steps:** 25 to 35
- **Sampler:** Euler a
- **Supported Resolution:**
```
1024 x 1024, 1152 x 896, 896 x 1152, 1216 x 832, 832 x 1216, 1344 x 768, 768 x 1344, 1536 x 640, 640 x 1536
```
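As a rough illustration of how the tag ordering and the recommended quality tags fit together, here is a hypothetical prompt-building helper. Only the tag strings come from this card; the function and its name are illustrative, not part of the model or any official tooling.

```python
# Hypothetical prompt builder illustrating the tag ordering above.
QUALITY_TAGS = "masterpiece, best quality, very aesthetic, absurdres"

def build_prompt(subject, character, series, extras=()):
    """Assemble: quality tags, subject (1girl/1boy), character, series, extras."""
    return ", ".join([QUALITY_TAGS, subject, character, series, *extras])

prompt = build_prompt("1girl", "lucy heartfilia", "fairy tail",
                      extras=["blonde hair", "smile"])
print(prompt)
```

Pass the resulting string as the positive prompt, and the recommended negative prompt above as-is.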
## Hires.fix Setting
- **Upscaler:** [4x_NMKD-YandereNeoXL](https://nmkd.de/?esrgan)
- **Hires step:** 10-15
- **Denoising:** 0.1-0.3 or 0.55 for latent upscaler
## Merge Parameter
1. Rae Diffusion XL V2 merged with RealCartoonXL V6 using MBW (0.0,1.0,0.8,0.5,0.25,0.0,0.0,0.0,0.0,0.0,0.0,0.3,0.5,0.71,1.0,0.56,0.71,1.0,0.83,0.1,0)
2. (1) merged with Blue Pencil XL v3.1.0 using MBW (0.0,0.11,0.22,0.33,0.44,0.55,0.44,0.33,0.22,0.11,0.0,0.11,0.22,0.33,0.44,0.55,0.44,0.33,0.22,0.11,0)
3. Raemu XL V4
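The MBW (merge block weights) lists above assign one interpolation weight per UNet block. The sketch below is a schematic of that per-block linear interpolation only; it is not the actual merge tooling and operates on plain numbers rather than real tensors.

```python
def mbw_merge(blocks_a, blocks_b, weights):
    """Per-block linear interpolation: w=0 keeps model A, w=1 takes model B."""
    assert len(blocks_a) == len(blocks_b) == len(weights)
    return [(1 - w) * a + w * b for a, b, w in zip(blocks_a, blocks_b, weights)]

# Toy example: first block stays A, second is the midpoint, third becomes B.
print(mbw_merge([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.0, 0.5, 1.0]))
```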
## License
Raemu XL V4 uses the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) inherited from Rae Diffusion XL V2, compatible with Stable Diffusion models. Key points:
1. **Modification Sharing:** If you modify Raemu XL V4, you must share both your changes and the original license.
2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.
3. **Distribution Terms:** Any distribution must be under this license or another with similar rules.
4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.
|
anushaporwal/wav2vec2-common_voice-tr-demo-mini-multiGPU-tr | anushaporwal | 2024-07-02T08:04:31Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:04:31Z | Entry not found |
necrobradley/face_predict | necrobradley | 2024-07-02T08:33:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-02T08:04:55Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: face_predict
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train[:800]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# face_predict
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2322
- Accuracy: 0.5625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
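The `total_train_batch_size` of 192 above is the product of the per-device batch size and the gradient accumulation steps (times the number of devices, assumed to be one here); a quick check:

```python
def effective_batch_size(per_device, accumulation_steps, num_devices=1):
    """Batch size per optimizer step under gradient accumulation."""
    return per_device * accumulation_steps * num_devices

print(effective_batch_size(32, 6))  # -> 192, matching total_train_batch_size
```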
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.9 | 3 | 2.0747 | 0.1187 |
| No log | 1.8 | 6 | 2.0728 | 0.1375 |
| 2.0713 | 3.0 | 10 | 2.0449 | 0.2 |
| 2.0713 | 3.9 | 13 | 2.0225 | 0.2562 |
| 2.0713 | 4.8 | 16 | 1.9779 | 0.2938 |
| 1.9642 | 6.0 | 20 | 1.8985 | 0.3688 |
| 1.9642 | 6.9 | 23 | 1.8440 | 0.4188 |
| 1.9642 | 7.8 | 26 | 1.7593 | 0.4437 |
| 1.7442 | 9.0 | 30 | 1.6551 | 0.4875 |
| 1.7442 | 9.9 | 33 | 1.5996 | 0.4875 |
| 1.7442 | 10.8 | 36 | 1.5324 | 0.5188 |
| 1.5402 | 12.0 | 40 | 1.5053 | 0.525 |
| 1.5402 | 12.9 | 43 | 1.4543 | 0.5188 |
| 1.5402 | 13.8 | 46 | 1.4335 | 0.5188 |
| 1.4064 | 15.0 | 50 | 1.3768 | 0.5938 |
| 1.4064 | 15.9 | 53 | 1.3583 | 0.6 |
| 1.4064 | 16.8 | 56 | 1.3464 | 0.575 |
| 1.2844 | 18.0 | 60 | 1.3245 | 0.6125 |
| 1.2844 | 18.9 | 63 | 1.3265 | 0.5563 |
| 1.2844 | 19.8 | 66 | 1.2899 | 0.5813 |
| 1.1834 | 21.0 | 70 | 1.2863 | 0.5625 |
| 1.1834 | 21.9 | 73 | 1.2939 | 0.5687 |
| 1.1834 | 22.8 | 76 | 1.2508 | 0.5938 |
| 1.1046 | 24.0 | 80 | 1.2604 | 0.5563 |
| 1.1046 | 24.9 | 83 | 1.2344 | 0.6062 |
| 1.1046 | 25.8 | 86 | 1.2124 | 0.6125 |
| 1.0379 | 27.0 | 90 | 1.2053 | 0.6312 |
| 1.0379 | 27.9 | 93 | 1.3067 | 0.5375 |
| 1.0379 | 28.8 | 96 | 1.2247 | 0.5875 |
| 1.0064 | 30.0 | 100 | 1.2060 | 0.625 |
| 1.0064 | 30.9 | 103 | 1.2308 | 0.575 |
| 1.0064 | 31.8 | 106 | 1.1936 | 0.6188 |
| 0.9611 | 33.0 | 110 | 1.2257 | 0.5938 |
| 0.9611 | 33.9 | 113 | 1.2302 | 0.5563 |
| 0.9611 | 34.8 | 116 | 1.2172 | 0.6 |
| 0.9351 | 36.0 | 120 | 1.2355 | 0.55 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
QuantFactory/llama3-8B-DarkIdol-2.0-Uncensored-GGUF | QuantFactory | 2024-07-02T08:56:45Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T08:05:52Z | Entry not found |
purelife/XV8 | purelife | 2024-07-03T00:47:39Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-02T08:06:20Z | ---
license: openrail
---
|
snowian/emotion | snowian | 2024-07-02T08:09:18Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T08:08:39Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.92867427809199
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1702
- Accuracy: 0.9285
- F1: 0.9287
## Model description
More information needed
## Intended uses & limitations
More information needed
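Once on the Hub, the checkpoint can be queried through the `text-classification` pipeline. A minimal sketch (the label names below come from the emotion dataset card and are an assumption about this checkpoint's config, which may instead expose `LABEL_0`..`LABEL_5`; check `id2label` in `config.json`):

```python
from transformers import pipeline

# Class names per the emotion dataset card; whether this checkpoint's config
# uses these names (rather than LABEL_0..LABEL_5) is an assumption.
EMOTION_LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def top_emotion(classifier, text: str) -> str:
    """Return the highest-scoring label for a single input string."""
    return classifier(text)[0]["label"]

# Uncomment to run against the published checkpoint (downloads weights):
# clf = pipeline("text-classification", model="snowian/emotion")
# print(top_emotion(clf, "I am over the moon about the results!"))
```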
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8067 | 1.0 | 250 | 0.2883 | 0.9115 | 0.9115 |
| 0.2204 | 2.0 | 500 | 0.1883 | 0.9295 | 0.9299 |
| 0.1495 | 3.0 | 750 | 0.1702 | 0.9285 | 0.9287 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
hflqf88888/OdysseyAgent-app | hflqf88888 | 2024-07-02T09:22:35Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen",
"text-generation",
"GUI",
"custom_code",
"en",
"zh",
"dataset:OpenGVLab/GUI-Odyssey",
"license:cc-by-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-07-02T08:09:18Z | ---
license: cc-by-4.0
datasets:
- OpenGVLab/GUI-Odyssey
language:
- en
- zh
tags:
- GUI
---
## OdysseyAgent-app
The OdysseyAgent fine-tuned on the Train-App split of GUI-Odyssey. |
prasunjeet/falcon-7b-sharded-bf16-finetuned-treccast | prasunjeet | 2024-07-02T13:28:55Z | 0 | 0 | null | [
"text-generation",
"region:us"
] | text-generation | 2024-07-02T08:09:36Z | ---
pipeline_tag: text-generation
--- |
Stable-Diffusion-PT/Stable-image-transformer-weight | Stable-Diffusion-PT | 2024-07-02T08:09:51Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:09:51Z | Entry not found |
glfarnold/results | glfarnold | 2024-07-02T12:12:16Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | summarization | 2024-07-02T08:09:52Z | ---
license: apache-2.0
base_model: t5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7600
- Rouge1: 0.1317
- Rouge2: 0.1118
- Rougel: 0.1232
- Rougelsum: 0.1232
## Model description
More information needed
## Intended uses & limitations
More information needed
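A minimal usage sketch for the fine-tuned summarizer (the repo id is an assumption about where this checkpoint is published, and the `summarize:` prefix follows the t5-small text-to-text convention):

```python
from transformers import pipeline

def make_t5_input(text: str) -> str:
    # T5 is a text-to-text model; summarization inputs conventionally carry
    # this prefix, matching how t5-small was pre-trained.
    return "summarize: " + text.strip()

# Uncomment to run against the fine-tuned checkpoint (downloads weights):
# summarizer = pipeline("summarization", model="glfarnold/results")
# print(summarizer(make_t5_input(article_text), max_new_tokens=60)[0]["summary_text"])
```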
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 250 | 1.8935 | 0.04 | 0.0359 | 0.0375 | 0.0375 |
| 2.8412 | 2.0 | 500 | 1.7864 | 0.1089 | 0.0934 | 0.0954 | 0.0954 |
| 2.8412 | 3.0 | 750 | 1.7600 | 0.1317 | 0.1118 | 0.1232 | 0.1232 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
FPHam/L3-8B-Everything-COT | FPHam | 2024-07-02T18:18:09Z | 0 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llm",
"llama3",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T08:10:03Z | ---
tags:
- llm
- llama
- llama3
---
<!-- header start -->
<div style="display: flex; flex-direction: column; align-items: center;">
</div>
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/L3-8B-Everything-COT/resolve/main/cot2.png" alt="COT EVERYTHING" style="width: 40%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy me Ko-fi</a></p>
</div>
<!-- header end -->
Legal: This model is based on Llama-3-8b, and is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
**Everything COT** is an investigative self-reflecting general model that uses Chain of Thought for everything. And I mean everything.
Instead of confidently proclaiming something (or confidently hallucinating other things) like most models, it carries an internal dialogue with itself and often casts doubt on uncertain topics while looking at them from various sides.
>What's your name?
>To answer this question, we must first understand the context. The question "What's your name?" is typically addressed to a human being. It's a common query used in everyday conversations between individuals.
>
>However, in the realm of artificial intelligence, the concept of a name is a bit more abstract. Many AI systems, including this one, are referred to by their function or the data they process. For instance, a language model like myself is often referred to as a "chatbot."
>
>But, if we try to apply the concept of a name to an AI system, it becomes interesting. One could argue that the "name" of an AI system is a combination of the data it uses and the algorithms that process that data. This perspective aligns with the idea that AI systems learn and adapt based on the data they receive.
>
>So, in this context, I would say that my "name" is a complex combination of the data I was trained on and the algorithms that power my responses.
**Everything COT uses the Llama 3 instruct template**
The correct jinja chat_template is in tokenizer_config.json
It was NOT trained with a system message, and you can use various system messages to steer the model.
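For illustration, the Llama 3 instruct layout can be written out by hand (a sketch only; in real code, let `tokenizer.apply_chat_template` build the prompt from the template in `tokenizer_config.json`; the optional system slot is where steering text goes):

```python
def llama3_instruct_prompt(user: str, system: str = "") -> str:
    """Hand-rolled Llama 3 instruct prompt, shown only to make the token
    layout concrete; prefer tokenizer.apply_chat_template in real code."""
    parts = ["<|begin_of_text|>"]
    if system:
        parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>")
    parts.append(f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>")
    # Generation continues from the open assistant header.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

print(llama3_instruct_prompt("What's your name?", system="Think it through step by step."))
```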
**Parameters**
It's up to you to discover the parameters that work best.

I tested it in the oobabooga WebUI using the off-the-shelf min_p preset: Temperature: 1, Top_p: 1, Top_k: 0, Typical_p: 1, min_p: 0.05, repetition_penalty: 1

Different parameters, like temperature, will affect the model's talkativeness and self-reflecting properties. If you find something really good, let me know and I'll post it here.
|
AesopX/123 | AesopX | 2024-07-02T08:11:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:10:25Z | # Use a pipeline as a high-level helper
from transformers import pipeline
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="shenzhi-wang/Llama3-8B-Chinese-Chat")
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("shenzhi-wang/Llama3-8B-Chinese-Chat")
model = AutoModelForCausalLM.from_pretrained("shenzhi-wang/Llama3-8B-Chinese-Chat") |
RULES007/test | RULES007 | 2024-07-02T08:11:48Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T08:11:48Z | ---
license: apache-2.0
---
|
NgTMDuc/weight | NgTMDuc | 2024-07-02T12:17:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:12:03Z | Entry not found |
quissuiven/donut-ktp-v2-test-2 | quissuiven | 2024-07-02T08:22:47Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T08:13:06Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-ktp-v2-test-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-ktp-v2-test-2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
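Inference with a Donut fine-tune follows the standard VisionEncoderDecoder recipe; a hedged sketch (the task-start token `<s_ktp>` and the `max_length` below are assumptions that depend on how the training data was serialized):

```python
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Hypothetical task-start token for this KTP fine-tune; the real one depends
# on how the ground-truth sequences were serialized during training.
TASK_PROMPT = "<s_ktp>"

def parse_document(image, model, processor, task_prompt=TASK_PROMPT):
    """Run one image through Donut and decode the predicted fields as JSON."""
    pixel_values = processor(image, return_tensors="pt").pixel_values
    decoder_input_ids = processor.tokenizer(
        task_prompt, add_special_tokens=False, return_tensors="pt"
    ).input_ids
    outputs = model.generate(
        pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
    )
    sequence = processor.batch_decode(outputs)[0]
    sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop task token
    return processor.token2json(sequence)

# model = VisionEncoderDecoderModel.from_pretrained("quissuiven/donut-ktp-v2-test-2")
# processor = DonutProcessor.from_pretrained("quissuiven/donut-ktp-v2-test-2")
# print(parse_document(ktp_image, model, processor))
```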
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
sarahai/whisper-medium-uzbek | sarahai | 2024-07-02T08:13:24Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:13:24Z | Entry not found |
rbhatia46/bge-base-financial-nvidia-matryoshka | rbhatia46 | 2024-07-02T08:16:31Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-07-02T08:16:22Z | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: As of December 31, 2023, deferred revenues for unsatisfied performance
obligations consisted of $769 million related to Hilton Honors that will be recognized
as revenue over approximately the next two years.
sentences:
- How many shares of common stock were issued in both 2022 and 2023?
- What is the projected timeline for recognizing revenue from deferred revenues
related to Hilton Honors as of December 31, 2023?
- What acquisitions did CVS Health Corporation complete in 2023 to enhance their
care delivery strategy?
- source_sentence: If a good or service does not qualify as distinct, it is combined
with the other non-distinct goods or services within the arrangement and these
combined goods or services are treated as a single performance obligation for
accounting purposes. The arrangement's transaction price is then allocated to
each performance obligation based on the relative standalone selling price of
each performance obligation.
sentences:
- What does the summary table indicate about the company's activities at the end
of 2023?
- What governs the treatment of goods or services that are not distinct within a
contractual arrangement?
- What is the basis for the Company to determine the Standalone Selling Price (SSP)
for each distinct performance obligation in contracts with multiple performance
obligations?
- source_sentence: As of January 2023, the maximum daily borrowing capacity under
the commercial paper program was approximately $2.75 billion.
sentences:
- What is the maximum daily borrowing capacity under the commercial paper program
as of January 2023?
- When does the Company's fiscal year end?
- How much cash did acquisition activities use in 2023?
- source_sentence: Federal Home Loan Bank borrowings had an interest rate of 4.59%
in 2022, which increased to 5.14% in 2023.
sentences:
- By what percentage did the company's capital expenditures increase in fiscal 2023
compared to fiscal 2022?
- What is the significance of Note 13 in the context of legal proceedings described
in the Annual Report on Form 10-K?
- How much did the Federal Home Loan Bank borrowings increase in terms of interest
rates from 2022 to 2023?
- source_sentence: The design of the Annual Report, with the consolidated financial
statements placed immediately after Part IV, enhances the integration of financial
data by maintaining a coherent structure.
sentences:
- How does the structure of the Annual Report on Form 10-K facilitate the integration
of the consolidated financial statements?
- Where can one find the Glossary of Terms and Acronyms in Item 8?
- What part of the annual report contains the consolidated financial statements
and accompanying notes?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6957142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8171428571428572
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8628571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6957142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2723809523809524
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17257142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08999999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6957142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8171428571428572
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8628571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7971144469297426
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7641831065759639
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7681728985040082
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6942857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.81
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8514285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6942857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17028571428571426
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6942857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.81
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8514285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7951260604161544
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7617998866213151
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7658003405075238
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7014285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7971428571428572
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.85
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8885714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7014285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26571428571428574
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08885714285714284
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7014285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7971428571428572
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.85
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8885714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.793266992460996
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7629580498866213
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7678096436855835
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6957142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8014285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8357142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8842857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6957142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2671428571428571
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16714285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08842857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6957142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8014285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8357142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8842857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.787378246207931
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7566984126984126
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7613545312565108
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6571428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7871428571428571
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8285714285714286
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8757142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6571428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2623809523809524
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1657142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08757142857142856
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6571428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7871428571428571
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8285714285714286
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8757142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7655516319615892
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7303951247165531
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7349875161463472
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rbhatia46/bge-base-financial-nvidia-matryoshka")
# Run inference
sentences = [
'The design of the Annual Report, with the consolidated financial statements placed immediately after Part IV, enhances the integration of financial data by maintaining a coherent structure.',
'How does the structure of the Annual Report on Form 10-K facilitate the integration of the consolidated financial statements?',
'Where can one find the Glossary of Terms and Acronyms in Item 8?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6957 |
| cosine_accuracy@3 | 0.8171 |
| cosine_accuracy@5 | 0.8629 |
| cosine_accuracy@10 | 0.9 |
| cosine_precision@1 | 0.6957 |
| cosine_precision@3 | 0.2724 |
| cosine_precision@5 | 0.1726 |
| cosine_precision@10 | 0.09 |
| cosine_recall@1 | 0.6957 |
| cosine_recall@3 | 0.8171 |
| cosine_recall@5 | 0.8629 |
| cosine_recall@10 | 0.9 |
| cosine_ndcg@10 | 0.7971 |
| cosine_mrr@10 | 0.7642 |
| **cosine_map@100** | **0.7682** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6943 |
| cosine_accuracy@3 | 0.81 |
| cosine_accuracy@5 | 0.8514 |
| cosine_accuracy@10 | 0.9 |
| cosine_precision@1 | 0.6943 |
| cosine_precision@3 | 0.27 |
| cosine_precision@5 | 0.1703 |
| cosine_precision@10 | 0.09 |
| cosine_recall@1 | 0.6943 |
| cosine_recall@3 | 0.81 |
| cosine_recall@5 | 0.8514 |
| cosine_recall@10 | 0.9 |
| cosine_ndcg@10 | 0.7951 |
| cosine_mrr@10 | 0.7618 |
| **cosine_map@100** | **0.7658** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7014 |
| cosine_accuracy@3 | 0.7971 |
| cosine_accuracy@5 | 0.85 |
| cosine_accuracy@10 | 0.8886 |
| cosine_precision@1 | 0.7014 |
| cosine_precision@3 | 0.2657 |
| cosine_precision@5 | 0.17 |
| cosine_precision@10 | 0.0889 |
| cosine_recall@1 | 0.7014 |
| cosine_recall@3 | 0.7971 |
| cosine_recall@5 | 0.85 |
| cosine_recall@10 | 0.8886 |
| cosine_ndcg@10 | 0.7933 |
| cosine_mrr@10 | 0.763 |
| **cosine_map@100** | **0.7678** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6957 |
| cosine_accuracy@3 | 0.8014 |
| cosine_accuracy@5 | 0.8357 |
| cosine_accuracy@10 | 0.8843 |
| cosine_precision@1 | 0.6957 |
| cosine_precision@3 | 0.2671 |
| cosine_precision@5 | 0.1671 |
| cosine_precision@10 | 0.0884 |
| cosine_recall@1 | 0.6957 |
| cosine_recall@3 | 0.8014 |
| cosine_recall@5 | 0.8357 |
| cosine_recall@10 | 0.8843 |
| cosine_ndcg@10 | 0.7874 |
| cosine_mrr@10 | 0.7567 |
| **cosine_map@100** | **0.7614** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.6571 |
| cosine_accuracy@3 | 0.7871 |
| cosine_accuracy@5 | 0.8286 |
| cosine_accuracy@10 | 0.8757 |
| cosine_precision@1 | 0.6571 |
| cosine_precision@3 | 0.2624 |
| cosine_precision@5 | 0.1657 |
| cosine_precision@10 | 0.0876 |
| cosine_recall@1 | 0.6571 |
| cosine_recall@3 | 0.7871 |
| cosine_recall@5 | 0.8286 |
| cosine_recall@10 | 0.8757 |
| cosine_ndcg@10 | 0.7656 |
| cosine_mrr@10 | 0.7304 |
| **cosine_map@100** | **0.735** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 45.53 tokens</li><li>max: 222 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 20.3 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------|
| <code>Acquisition activity used cash of $765 million in 2023, primarily related to a Beauty acquisition.</code> | <code>How much cash did acquisition activities use in 2023?</code> |
| <code>In a financial report, Part IV Item 15 includes Exhibits and Financial Statement Schedules as mentioned.</code> | <code>What content can be expected under Part IV Item 15 in a financial report?</code> |
| <code>we had more than 8.3 million fiber consumer wireline broadband customers, adding 1.1 million during the year.</code> | <code>How many fiber consumer wireline broadband customers did the company have at the end of the year?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
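Because the loss is applied at dimensions 768/512/256/128/64, embeddings from this model can be truncated to a prefix and re-normalized with modest quality loss. A sketch of that operation (recent sentence-transformers releases expose the same idea through a `truncate_dim` argument, which may be the more convenient route):

```python
import numpy as np

def truncate_matryoshka(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and L2-renormalize, as Matryoshka
    training intends. Works on a single vector or a batch."""
    out = np.asarray(embeddings)[..., :dim]
    norms = np.linalg.norm(out, axis=-1, keepdims=True)
    return out / np.clip(norms, 1e-12, None)

# Demo on random stand-ins for 768-dim model outputs.
vecs = np.random.default_rng(0).normal(size=(3, 768))
small = truncate_matryoshka(vecs, 256)
print(small.shape)  # (3, 256)
```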
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
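The arguments above combine a linear warmup over the first 10% of steps (`warmup_ratio`: 0.1) with cosine decay (`lr_scheduler_type`: cosine) from the peak `learning_rate` of 2e-05. A minimal plain-Python sketch of that schedule — the actual implementation lives in the `transformers` scheduler utilities, so treat this as an approximation:

```python
import math

def cosine_lr_with_warmup(step, total_steps, base_lr=2e-5, warmup_ratio=0.1):
    """Approximate the cosine-with-warmup schedule configured above."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr during warmup.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Peak LR is reached at the end of warmup, then decays smoothly to ~0.
lrs = [cosine_lr_with_warmup(s, total_steps=48) for s in range(49)]
```

With 48 total steps (as in the training logs below is not implied — 48 here is just an illustrative horizon), the peak is hit after about 5 steps and the rate falls to zero by the final step.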
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.8122 | 10 | 1.5751 | - | - | - | - | - |
| 0.9746 | 12 | - | - | - | - | - | 0.7580 |
| 0.8122 | 10 | 0.6362 | - | - | - | - | - |
| 0.9746 | 12 | - | 0.7503 | 0.7576 | 0.7653 | 0.7282 | 0.7638 |
| 1.6244 | 20 | 0.4426 | - | - | - | - | - |
| 1.9492 | 24 | - | 0.7544 | 0.7662 | 0.7640 | 0.7311 | 0.7676 |
| 2.4365 | 30 | 0.3217 | - | - | - | - | - |
| 2.9239 | 36 | - | 0.7608 | 0.7684 | 0.7662 | 0.7341 | 0.7686 |
| 3.2487 | 40 | 0.2761 | - | - | - | - | - |
| **3.8985** | **48** | **-** | **0.7614** | **0.7678** | **0.7658** | **0.7350** | **0.7682** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.6
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
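The MultipleNegativesRankingLoss cited above treats, for each anchor in a batch, its paired positive as the correct "class" and every other positive in the same batch as a negative, then applies cross-entropy over scaled cosine similarities. A pure-Python sketch of the idea — the scale factor of 20 matches the sentence-transformers default, but this is an illustration, not the library's implementation:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def mnr_loss(anchors, positives, scale=20.0):
    """Mean cross-entropy where anchor i should rank positives[i] above all in-batch negatives."""
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [scale * cosine(a, p) for p in positives]
        log_z = math.log(sum(math.exp(l) for l in logits))
        total += log_z - logits[i]  # -log softmax probability of the true pair
    return total / len(anchors)
```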
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Yash0109/diaratechHf_llama188fe336-1221-4050-af76-a84e95bc6450 | Yash0109 | 2024-07-02T08:18:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"text-generation",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-07-02T08:16:30Z | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: diaratechHf_llama188fe336-1221-4050-af76-a84e95bc6450
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# diaratechHf_llama188fe336-1221-4050-af76-a84e95bc6450
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 2
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.3.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
lysandre/test-4 | lysandre | 2024-07-02T08:40:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T08:18:36Z | ---
license: mit
---
This model uses a custom `config.head_dim`, as allowed by the architecture (see the 7B model). |
SQAI/bge-embedding-model3 | SQAI | 2024-07-02T08:19:43Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:397",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:SQAI/bge-embedding-model",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2024-07-02T08:19:22Z | ---
base_model: SQAI/bge-embedding-model
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:397
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Time taken for the streetlight to activate or light up from the
command
sentences:
- '"Can you provide a report showing each unique streetlight identifier along with
its power usage recorded in watts for the last three months, as well as instances
where power consumption was higher than expected, potentially indicating faults
(failures), or where the voltage supplied was below the safe operating level (failure)
for the same period?"'
- failure reasons for dysfunctional streetlights for geozone = 233
- '"Can you provide me with a comprehensive report for the past year showing the
maximum load current, internal temperature, activation time, wireless signal strength
of each streetlight in the group? Also, can you include metering component faults,
instances where power under load was higher than normal and the threshold level
for flickering? I''d like to understand the performance of the driver and instances
where voltage under load was lower than expected. Finally, could you also specify
the maximum latitude covered by this streetlight group?"'
- source_sentence: Power consumption is lower than expected, possibly due to hardware
issues (failure)
sentences:
- '"What is the total power consumption in watts of the ''Main Boulevard Group''
streetlights in the ''Grid B3'' Y-coordinate during the last 3 months, including
instances where the supplied voltage was below the safe operating level causing
failures? Also, how many complete power losses did this group of streetlights
experience during this period due to supply issues or damage, and was there any
instance where the ''Mode 4'' control switch or the ''Broadcast ID 9'' subscription
was used for troubleshooting or restoring the lights?"'
- '"What is the failure count for the last 3 months for the streetlight with ID
''unique streetlight identifier'' located in ''time zone where the streetlight
is located'', specifically on ''Name of the street for the streetlight in error'',
considering failures include lower than average power consumption affirming a
hardware issue as identified by ''hardware version of the streetlight'' and difficulties
in network connection for remote management? What is the IMEI number of the affected
device and what''s the minimum current level it reached to be considered abnormal?
Additionally, could you provide the maximum latitude of the geographic area covered
by this group of streetlights?"'
- '"Can you give me a breakdown of the total operational usage, active controller
hours, and dimming controls type for each streetlight within a specified longitude
range over the last six months, as well as any instances of relay status changes?
And, could you also list the number of instances where the voltage under load
was higher than expected, faults in the metering components, and any cases of
complete power loss during this period? Additionally, can you inform me the strength
of the wireless signal received by each streetlight''s communication module?"'
- source_sentence: Upper voltage limit considered safe and efficient for streetlight
operation
sentences:
- '"Can you pull a report detailing if there have been any instances where the power
consumption was unusually high, potentially indicating faults over the last 3
months for the streetlight device with IMEI number XYZ, and provide the total
hours of operation for its controller during that time? Include in this report
the last timestamp when the threshold settings were updated, the level of that
threshold for recording flickering, measured in occurrences, and the Y-coordinate
for its location in the streetlight grid layout?"'
- '"What is the current dimming level of the streetlight in operation, and have
there been any instances where the voltage exceeded the safe operating level causing
a failure? Additionally, I would like to know the total energy consumed, recorded
in kilowatt-hours, for a specific group of streetlights and their efficiency of
power usage represented by the power factor. Could you also tell me what type
of DALI dimming protocol this group of streetlights is using, and the SIM card
code used for their cellular network communication? Furthermore, what is the upper
voltage limit considered safe and efficient for their operation and when was the
latest data recorded or action performed by these streetlights?"'
- '"What is the minimum load current that suggests suboptimal operation, the range
of current indicating potential issues, the name of a group of streetlights, their
drawn electrical current measure in amperes, details of faults in the link control
mechanism, the geographic longitude range they cover, the identifier for their
broadcast subscription, their maximum safe voltage under load conditions, minimum
abnormal current level, and their linking rights for synchronized control?"'
- source_sentence: IMEI number of the streetlight device
sentences:
- '"What is the failure count for the last 3 months for the streetlight with ID
''unique streetlight identifier'' located in ''time zone where the streetlight
is located'', specifically on ''Name of the street for the streetlight in error'',
considering failures include lower than average power consumption affirming a
hardware issue as identified by ''hardware version of the streetlight'' and difficulties
in network connection for remote management? What is the IMEI number of the affected
device and what''s the minimum current level it reached to be considered abnormal?
Additionally, could you provide the maximum latitude of the geographic area covered
by this group of streetlights?"'
- '"Can you provide the description of the group of streetlights, along with the
geographic zone identifier of the streetlight that recently encountered a failure?
Could you also include the last updated timestamp of the threshold settings, the
maximum load current that indicates potential risk or overload, the frequency
of the electricity supply measured in hertz, and the minimum power usage level
below which it is considered abnormal for these streetlights?"'
- '"What is the X-coordinate for the group of streetlights in a grid layout that
have an operational age of more than 10,000 hours, registered a lower lux level
below which additional lighting may be necessary and have had general faults related
to light output, possibly including those that remain on during daylight hours
due to sensor faults, despite the ambient light level detected being adequate?
Also, could you provide the records of those which have the rights or permissions
to synchronise control across multiple streetlights, the current electrical current
drawn by each of these streetlights, the minimum operational voltage under their
load conditions, and any instance where they reached the threshold levels to continue
recording as flickering?"'
- source_sentence: Count of how many times the streetlight has been switched on
sentences:
- '"What is the type of DALI dimming protocol used by our streetlights, do we have
the necessary permissions to link multiple of these streetlights for synchronized
control, how many times in the past 3 months has the supplied voltage dropped
below the safe operating level causing failure, and what is the minimum current
level below which the streetlight operation is considered abnormal?"'
- '"Can you provide me with a comprehensive report for the past year showing the
maximum load current, internal temperature, activation time, wireless signal strength
of each streetlight in the group? Also, can you include metering component faults,
instances where power under load was higher than normal and the threshold level
for flickering? I''d like to understand the performance of the driver and instances
where voltage under load was lower than expected. Finally, could you also specify
the maximum latitude covered by this streetlight group?"'
- '"What is the total count of times the streetlight has been switched on and what
were the ambient light levels, measured in lux, at those instances?"'
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.022222222222222223
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.044444444444444446
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.17777777777777778
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.2222222222222222
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.022222222222222223
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.014814814814814814
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.035555555555555556
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.022222222222222223
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.022222222222222223
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.044444444444444446
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.17777777777777778
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.2222222222222222
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.10314853022641256
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.06666666666666668
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.09055451253282244
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.022222222222222223
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.044444444444444446
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.17777777777777778
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.2222222222222222
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.022222222222222223
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.014814814814814814
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.035555555555555556
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.022222222222222223
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.022222222222222223
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.044444444444444446
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.17777777777777778
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.2222222222222222
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.10314853022641256
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.06666666666666668
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.09055451253282244
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.022222222222222223
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.044444444444444446
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.13333333333333333
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.26666666666666666
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.022222222222222223
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.014814814814814814
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.026666666666666665
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.026666666666666672
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.022222222222222223
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.044444444444444446
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.13333333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.26666666666666666
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.11180419878864006
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.06644620811287479
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.0861674794441296
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.022222222222222223
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.13333333333333333
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.15555555555555556
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.007407407407407407
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.026666666666666665
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.015555555555555557
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.022222222222222223
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.13333333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.15555555555555556
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.06402667034388869
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.03574074074074074
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.05715693214212387
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.0
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.022222222222222223
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.08888888888888889
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.13333333333333333
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.0
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.007407407407407407
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.017777777777777778
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.013333333333333332
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.0
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.022222222222222223
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.08888888888888889
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.13333333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.0528696817100619
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.028518518518518516
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.0469896238659224
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [SQAI/bge-embedding-model](https://huggingface.co/SQAI/bge-embedding-model). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [SQAI/bge-embedding-model](https://huggingface.co/SQAI/bge-embedding-model) <!-- at revision 9a9bc3f795ddfc56610a621b37aa077ae0653fa4 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
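The pipeline above pools by taking the `[CLS]` token embedding (`pooling_mode_cls_token: True`) and then L2-normalizes it, so downstream cosine similarity reduces to a plain dot product. A rough stdlib sketch of those last two stages, assuming token embeddings are plain lists of floats (the real modules operate on PyTorch tensors):

```python
import math

def cls_pool(token_embeddings):
    """CLS pooling: the sentence vector is simply the first token's embedding."""
    return token_embeddings[0]

def l2_normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

# After Normalize(), every sentence vector has unit length.
sentence_vec = l2_normalize(cls_pool([[3.0, 4.0], [1.0, 2.0]]))
```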
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("SQAI/bge-embedding-model3")
# Run inference
sentences = [
'Count of how many times the streetlight has been switched on',
'"What is the total count of times the streetlight has been switched on and what were the ambient light levels, measured in lux, at those instances?"',
'"What is the type of DALI dimming protocol used by our streetlights, do we have the necessary permissions to link multiple of these streetlights for synchronized control, how many times in the past 3 months has the supplied voltage dropped below the safe operating level causing failure, and what is the minimum current level below which the streetlight operation is considered abnormal?"',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0222 |
| cosine_accuracy@3 | 0.0444 |
| cosine_accuracy@5 | 0.1778 |
| cosine_accuracy@10 | 0.2222 |
| cosine_precision@1 | 0.0222 |
| cosine_precision@3 | 0.0148 |
| cosine_precision@5 | 0.0356 |
| cosine_precision@10 | 0.0222 |
| cosine_recall@1 | 0.0222 |
| cosine_recall@3 | 0.0444 |
| cosine_recall@5 | 0.1778 |
| cosine_recall@10 | 0.2222 |
| cosine_ndcg@10 | 0.1031 |
| cosine_mrr@10 | 0.0667 |
| **cosine_map@100** | **0.0906** |
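The accuracy@k and MRR figures in these tables follow the usual single-relevant-document definitions (MAP@100 generalizes this to multiple relevant documents). A compact sketch of the simpler two, assuming one relevant id per query:

```python
def accuracy_at_k(ranked_ids, relevant_id, k):
    """1 if the relevant document appears in the top-k results, else 0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def mrr_at_k(ranked_ids, relevant_id, k=10):
    """Reciprocal rank of the relevant document; 0 if it falls outside the top-k."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0
```

Averaging these per-query scores over the evaluation set yields the table values above.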
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0222 |
| cosine_accuracy@3 | 0.0444 |
| cosine_accuracy@5 | 0.1778 |
| cosine_accuracy@10 | 0.2222 |
| cosine_precision@1 | 0.0222 |
| cosine_precision@3 | 0.0148 |
| cosine_precision@5 | 0.0356 |
| cosine_precision@10 | 0.0222 |
| cosine_recall@1 | 0.0222 |
| cosine_recall@3 | 0.0444 |
| cosine_recall@5 | 0.1778 |
| cosine_recall@10 | 0.2222 |
| cosine_ndcg@10 | 0.1031 |
| cosine_mrr@10 | 0.0667 |
| **cosine_map@100** | **0.0906** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0222 |
| cosine_accuracy@3 | 0.0444 |
| cosine_accuracy@5 | 0.1333 |
| cosine_accuracy@10 | 0.2667 |
| cosine_precision@1 | 0.0222 |
| cosine_precision@3 | 0.0148 |
| cosine_precision@5 | 0.0267 |
| cosine_precision@10 | 0.0267 |
| cosine_recall@1 | 0.0222 |
| cosine_recall@3 | 0.0444 |
| cosine_recall@5 | 0.1333 |
| cosine_recall@10 | 0.2667 |
| cosine_ndcg@10 | 0.1118 |
| cosine_mrr@10 | 0.0664 |
| **cosine_map@100** | **0.0862** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0 |
| cosine_accuracy@3 | 0.0222 |
| cosine_accuracy@5 | 0.1333 |
| cosine_accuracy@10 | 0.1556 |
| cosine_precision@1 | 0.0 |
| cosine_precision@3 | 0.0074 |
| cosine_precision@5 | 0.0267 |
| cosine_precision@10 | 0.0156 |
| cosine_recall@1 | 0.0 |
| cosine_recall@3 | 0.0222 |
| cosine_recall@5 | 0.1333 |
| cosine_recall@10 | 0.1556 |
| cosine_ndcg@10 | 0.064 |
| cosine_mrr@10 | 0.0357 |
| **cosine_map@100** | **0.0572** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.0 |
| cosine_accuracy@3 | 0.0222 |
| cosine_accuracy@5 | 0.0889 |
| cosine_accuracy@10 | 0.1333 |
| cosine_precision@1 | 0.0 |
| cosine_precision@3 | 0.0074 |
| cosine_precision@5 | 0.0178 |
| cosine_precision@10 | 0.0133 |
| cosine_recall@1 | 0.0 |
| cosine_recall@3 | 0.0222 |
| cosine_recall@5 | 0.0889 |
| cosine_recall@10 | 0.1333 |
| cosine_ndcg@10 | 0.0529 |
| cosine_mrr@10 | 0.0285 |
| **cosine_map@100** | **0.047** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 397 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 13.89 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 101.84 tokens</li><li>max: 175 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Electrical current drawn by the streetlight, measured in amperes</code> | <code>"What is the minimum load current that suggests suboptimal operation, the range of current indicating potential issues, the name of a group of streetlights, their drawn electrical current measure in amperes, details of faults in the link control mechanism, the geographic longitude range they cover, the identifier for their broadcast subscription, their maximum safe voltage under load conditions, minimum abnormal current level, and their linking rights for synchronized control?"</code> |
| <code>Faults in the link control mechanism managing multiple streetlights (failure)</code> | <code>"Can you show me the data of the unique streetlight identifier in the geoZone with faults in the link control mechanism that manages multiple streetlights for the last three months, where the control mode setting of the streetlight was automated, also show the current dimming level of the streetlight in operation at that time, the maximum current level considered unsafe for the streetlight operation, the maximum load current indicating potential risk or overload, along with the time zone where the streetlight is located?"</code> |
| <code>The relay responsible for turning the streetlight on and off is sticking (failure)</code> | <code>"What is the network time synchronization receipt from the central control, maximum load current reading, whether the relay responsible for turning the streetlights on and off experienced any failures, the delta or height of the grid area occupied by this group of streetlights, the maximum load power level, and longitude of the streetlight that had a low power factor indicating inefficiency and possible reactive power issues in the last 3 months?"</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
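Under these settings the training loss is a weighted sum of `MultipleNegativesRankingLoss` evaluated on embeddings truncated to each Matryoshka dimension. A minimal pure-Python sketch of the weighting step only (illustrative; the per-dimension loss values below are hypothetical placeholders, and the real computation lives inside sentence-transformers):

```python
# Illustrative sketch of how MatryoshkaLoss combines per-dimension losses.
# Dims and weights mirror the JSON config above; loss values are made up.

matryoshka_dims = [384, 256, 128, 64]
matryoshka_weights = [1, 1, 1, 1]

def combine_matryoshka_losses(per_dim_losses, weights):
    """Weighted sum of the ranking loss evaluated at each truncation dim."""
    return sum(w * l for w, l in zip(weights, per_dim_losses))

# Hypothetical per-dimension MultipleNegativesRankingLoss values:
losses = [0.5, 1.0, 1.25, 1.5]
total = combine_matryoshka_losses(losses, matryoshka_weights)
print(total)  # 4.25
```

With equal weights of 1, every truncation dimension contributes equally, which encourages the first 64, 128, and 256 components of the 384-dim embedding to each remain useful on their own.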
### Evaluation Dataset
#### Unnamed Dataset
* Size: 45 evaluation samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 14.27 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 104.58 tokens</li><li>max: 167 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Power factor of the streetlight, indicating the efficiency of power usage</code> | <code>"Could you please provide the information on how many times the streetlight within the set geographical zone has been switched on? Can you also include details on its internal operating temperature, the status of the relay in the streetlight, any instances of it exceeding safe operating temperature limits, and the efficiency of its power usage or power factor, particularly when the operational voltage hit its minimum under load conditions? Lastly, provide insights into the width of the grid area it occupies. This information does not need to highlight any specific failure occurrences."</code> |
| <code>General fault related to the light output of the streetlight (failure)</code> | <code>"What is the X-coordinate for the group of streetlights in a grid layout that have an operational age of more than 10,000 hours, registered a lower lux level below which additional lighting may be necessary and have had general faults related to light output, possibly including those that remain on during daylight hours due to sensor faults, despite the ambient light level detected being adequate? Also, could you provide the records of those which have the rights or permissions to synchronise control across multiple streetlights, the current electrical current drawn by each of these streetlights, the minimum operational voltage under their load conditions, and any instance where they reached the threshold levels to continue recording as flickering?"</code> |
| <code>Name of the street for the streetlight in error (failure)</code> | <code>failure count for street name = Chestnut Street, Oak Avenue for time = last 55 days in streetlighting</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-06
- `weight_decay`: 0.03
- `num_train_epochs`: 100
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.2
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
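With gradient accumulation, the effective batch size and warmup schedule follow directly from the values above. A small sketch of the arithmetic (the trainer computes this internally; `total_steps` here is a hypothetical placeholder, since the real count depends on dataset size and epochs):

```python
# Derive effective batch size and warmup steps from the hyperparameters above.

per_device_train_batch_size = 32
gradient_accumulation_steps = 16
warmup_ratio = 0.2

# One optimizer step consumes this many samples (single device assumed):
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 512

# With warmup_ratio, warmup covers the first 20% of all optimizer steps:
total_steps = 100  # hypothetical total; the real value depends on the data
warmup_steps = int(warmup_ratio * total_steps)
print(warmup_steps)  # 20
```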
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-06
- `weight_decay`: 0.03
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 100
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:--------:|:------:|:-------------:|:----------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 1.0 | 1 | 8.8958 | 6.8979 | 0.0490 | 0.0530 | 0.0604 | 0.0457 | 0.0604 |
| 1.0769 | 2 | 0.8656 | - | - | - | - | - | - |
| 2.0 | 3 | 7.9734 | 6.8985 | 0.0491 | 0.0528 | 0.0604 | 0.0455 | 0.0604 |
| 2.1538 | 4 | 1.5652 | - | - | - | - | - | - |
| 3.0 | 5 | 7.4072 | 6.8893 | 0.0491 | 0.0532 | 0.0605 | 0.0448 | 0.0605 |
| 3.2308 | 6 | 2.2162 | - | - | - | - | - | - |
| 4.0 | 7 | 6.6659 | 6.8768 | 0.0491 | 0.0535 | 0.0605 | 0.0441 | 0.0605 |
| 4.3077 | 8 | 2.8917 | - | - | - | - | - | - |
| 5.0 | 9 | 6.0505 | 6.8666 | 0.0497 | 0.0539 | 0.0608 | 0.0440 | 0.0608 |
| 5.3846 | 10 | 3.5565 | - | - | - | - | - | - |
| 6.0 | 11 | 5.4886 | 6.8504 | 0.0509 | 0.0554 | 0.0614 | 0.0444 | 0.0614 |
| 6.4615 | 12 | 4.206 | - | - | - | - | - | - |
| 7.0 | 13 | 4.6528 | 6.8280 | 0.0527 | 0.0554 | 0.0621 | 0.0424 | 0.0621 |
| 7.5385 | 14 | 4.87 | - | - | - | - | - | - |
| 8.0 | 15 | 3.9965 | 6.8114 | 0.0529 | 0.0569 | 0.0659 | 0.0416 | 0.0659 |
| 8.6154 | 16 | 5.5005 | - | - | - | - | - | - |
| 9.0 | 17 | 3.2411 | 6.7791 | 0.0532 | 0.0551 | 0.0661 | 0.0421 | 0.0661 |
| 9.6923 | 18 | 6.1103 | - | - | - | - | - | - |
| 10.0 | 19 | 2.6339 | 6.7559 | 0.0575 | 0.0551 | 0.0663 | 0.0425 | 0.0663 |
| 10.7692 | 20 | 6.8119 | - | - | - | - | - | - |
| 11.0 | 21 | 1.9097 | 6.7235 | 0.0572 | 0.0617 | 0.0629 | 0.0439 | 0.0629 |
| 11.8462 | 22 | 7.4981 | - | - | - | - | - | - |
| 12.0 | 23 | 1.2004 | 6.6966 | 0.0571 | 0.0603 | 0.0660 | 0.0437 | 0.0660 |
| 12.9231 | 24 | 8.1134 | - | - | - | - | - | - |
| 13.0 | 25 | 0.5338 | 6.6707 | 0.0557 | 0.0607 | 0.0645 | 0.0428 | 0.0645 |
| 14.0 | 26 | 8.6211 | 6.6592 | 0.0558 | 0.0610 | 0.0651 | 0.0458 | 0.0651 |
| 15.0 | 27 | 8.5679 | - | - | - | - | - | - |
| 15.0769 | 28 | 0.1453 | 6.6335 | 0.0561 | 0.0587 | 0.0731 | 0.0420 | 0.0731 |
| 16.0 | 29 | 8.3531 | - | - | - | - | - | - |
| 16.1538 | 30 | 0.0862 | 6.6090 | 0.0560 | 0.0593 | 0.0725 | 0.0443 | 0.0725 |
| 17.0 | 31 | 8.4562 | 6.6064 | 0.0564 | 0.0616 | 0.0747 | 0.0445 | 0.0747 |
| 17.0769 | 32 | 0.8101 | - | - | - | - | - | - |
| 18.0 | 33 | 7.578 | 6.5912 | 0.0555 | 0.0597 | 0.0730 | 0.0470 | 0.0730 |
| 18.1538 | 34 | 1.4507 | - | - | - | - | - | - |
| 19.0 | 35 | 6.996 | 6.5713 | 0.0560 | 0.0617 | 0.0764 | 0.0474 | 0.0764 |
| 19.2308 | 36 | 2.0412 | - | - | - | - | - | - |
| **20.0** | **37** | **6.4129** | **6.5519** | **0.061** | **0.06** | **0.0731** | **0.0486** | **0.0731** |
| 20.3077 | 38 | 2.6771 | - | - | - | - | - | - |
| 21.0 | 39 | 5.7615 | 6.5315 | 0.0605 | 0.0656 | 0.0722 | 0.0474 | 0.0722 |
| 21.3846 | 40 | 3.2979 | - | - | - | - | - | - |
| 22.0 | 41 | 5.1123 | 6.5186 | 0.0608 | 0.0676 | 0.0721 | 0.0486 | 0.0721 |
| 22.4615 | 42 | 3.9339 | - | - | - | - | - | - |
| 23.0 | 43 | 4.3093 | 6.5020 | 0.0607 | 0.0816 | 0.0626 | 0.0473 | 0.0626 |
| 23.5385 | 44 | 4.6842 | - | - | - | - | - | - |
| 24.0 | 45 | 3.7325 | 6.4972 | 0.0591 | 0.0670 | 0.0648 | 0.0474 | 0.0648 |
| 24.6154 | 46 | 5.1717 | - | - | - | - | - | - |
| 25.0 | 47 | 3.1253 | 6.4821 | 0.0584 | 0.0676 | 0.0648 | 0.0473 | 0.0648 |
| 25.6923 | 48 | 5.7321 | - | - | - | - | - | - |
| 26.0 | 49 | 2.4503 | 6.4660 | 0.0587 | 0.0672 | 0.0694 | 0.0478 | 0.0694 |
| 26.7692 | 50 | 6.5409 | - | - | - | - | - | - |
| 27.0 | 51 | 1.791 | 6.4670 | 0.0590 | 0.0868 | 0.0691 | 0.0478 | 0.0691 |
| 27.8462 | 52 | 7.2565 | - | - | - | - | - | - |
| 28.0 | 53 | 1.0513 | 6.4549 | 0.0590 | 0.0674 | 0.0689 | 0.0478 | 0.0689 |
| 28.9231 | 54 | 7.7128 | - | - | - | - | - | - |
| 29.0 | 55 | 0.4118 | 6.4519 | 0.0592 | 0.0873 | 0.0864 | 0.0477 | 0.0864 |
| 30.0 | 56 | 8.15 | 6.4472 | 0.0589 | 0.0872 | 0.0890 | 0.0477 | 0.0890 |
| 31.0 | 57 | 8.1077 | - | - | - | - | - | - |
| 31.0769 | 58 | 0.0869 | 6.4347 | 0.0596 | 0.0716 | 0.0709 | 0.0477 | 0.0709 |
| 32.0 | 59 | 7.9701 | - | - | - | - | - | - |
| 32.1538 | 60 | 0.1293 | 6.4306 | 0.0580 | 0.0897 | 0.0880 | 0.0474 | 0.0880 |
| 33.0 | 61 | 8.1437 | 6.4254 | 0.0561 | 0.0895 | 0.0889 | 0.0474 | 0.0889 |
| 33.0769 | 62 | 0.7537 | - | - | - | - | - | - |
| 34.0 | 63 | 7.4904 | 6.4215 | 0.0574 | 0.0879 | 0.0885 | 0.0475 | 0.0885 |
| 34.1538 | 64 | 1.3546 | - | - | - | - | - | - |
| 35.0 | 65 | 6.7289 | 6.4152 | 0.0555 | 0.0888 | 0.0701 | 0.0474 | 0.0701 |
| 35.2308 | 66 | 1.9694 | - | - | - | - | - | - |
| 36.0 | 67 | 6.1359 | 6.4122 | 0.0558 | 0.0875 | 0.0707 | 0.0477 | 0.0707 |
| 36.3077 | 68 | 2.5335 | - | - | - | - | - | - |
| 37.0 | 69 | 5.5274 | 6.4068 | 0.0558 | 0.0694 | 0.0718 | 0.0470 | 0.0718 |
| 37.3846 | 70 | 3.1245 | - | - | - | - | - | - |
| 38.0 | 71 | 4.8913 | 6.4102 | 0.0537 | 0.0701 | 0.0718 | 0.0470 | 0.0718 |
| 38.4615 | 72 | 3.8402 | - | - | - | - | - | - |
| 39.0 | 73 | 4.12 | 6.4003 | 0.0537 | 0.0681 | 0.0719 | 0.0471 | 0.0719 |
| 39.5385 | 74 | 4.5087 | - | - | - | - | - | - |
| 40.0 | 75 | 3.6038 | 6.3984 | 0.0575 | 0.0878 | 0.0890 | 0.0471 | 0.0890 |
| 40.6154 | 76 | 5.1065 | - | - | - | - | - | - |
| 41.0 | 77 | 3.042 | 6.3966 | 0.0577 | 0.0684 | 0.0715 | 0.0469 | 0.0715 |
| 41.6923 | 78 | 5.6277 | - | - | - | - | - | - |
| 42.0 | 79 | 2.3259 | 6.3975 | 0.0568 | 0.0681 | 0.0722 | 0.0470 | 0.0722 |
| 42.7692 | 80 | 6.2878 | - | - | - | - | - | - |
| 43.0 | 81 | 1.8184 | 6.3983 | 0.0571 | 0.0680 | 0.0728 | 0.0470 | 0.0728 |
| 43.8462 | 82 | 6.9261 | - | - | - | - | - | - |
| 44.0 | 83 | 1.0775 | 6.3942 | 0.0580 | 0.0680 | 0.0896 | 0.0469 | 0.0896 |
| 44.9231 | 84 | 7.523 | - | - | - | - | - | - |
| 45.0 | 85 | 0.3703 | 6.3935 | 0.0573 | 0.0861 | 0.0911 | 0.0468 | 0.0911 |
| 46.0 | 86 | 7.945 | 6.3961 | 0.0555 | 0.0855 | 0.0898 | 0.0468 | 0.0898 |
| 47.0 | 87 | 7.8871 | - | - | - | - | - | - |
| 47.0769 | 88 | 0.1237 | 6.3924 | 0.0556 | 0.0858 | 0.0902 | 0.0467 | 0.0902 |
| 48.0 | 89 | 7.9213 | - | - | - | - | - | - |
| 48.1538 | 90 | 0.0809 | 6.3922 | 0.0600 | 0.0684 | 0.0727 | 0.0467 | 0.0727 |
| 49.0 | 91 | 7.7954 | 6.3901 | 0.0601 | 0.0858 | 0.0907 | 0.0467 | 0.0907 |
| 49.0769 | 92 | 0.7928 | - | - | - | - | - | - |
| 50.0 | 93 | 7.3085 | 6.3915 | 0.0567 | 0.0681 | 0.0909 | 0.0471 | 0.0909 |
| 50.1538 | 94 | 1.3327 | - | - | - | - | - | - |
| 51.0 | 95 | 6.7179 | 6.3970 | 0.0597 | 0.0677 | 0.0725 | 0.0466 | 0.0725 |
| 51.2308 | 96 | 1.9239 | - | - | - | - | - | - |
| 52.0 | 97 | 6.0889 | 6.3939 | 0.0567 | 0.0858 | 0.0907 | 0.0470 | 0.0907 |
| 52.3077 | 98 | 2.5265 | - | - | - | - | - | - |
| 53.0 | 99 | 5.4464 | 6.3943 | 0.0539 | 0.0677 | 0.0732 | 0.0466 | 0.0732 |
| 53.3846 | 100 | 3.0337 | 6.3940 | 0.0572 | 0.0862 | 0.0906 | 0.0470 | 0.0906 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
de-coder/google-bert-bert-base-uncased-gguf | de-coder | 2024-07-02T08:19:25Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:19:25Z | Entry not found |
Ryan-Pham/route_2 | Ryan-Pham | 2024-07-02T08:19:26Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:19:26Z | Entry not found |
abdfajar707/llama3_8B_lora_model_rkp_pn2025_v2 | abdfajar707 | 2024-07-02T08:20:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T08:20:30Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** abdfajar707
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bikeZero/my_awesome_model | bikeZero | 2024-07-02T08:20:31Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:20:31Z | Entry not found |
PereLluis13/relik-reader-deberta-large-wikipedia-aida-full-interleave | PereLluis13 | 2024-07-02T08:21:25Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"relik-reader",
"feature-extraction",
"custom_code",
"region:us"
] | feature-extraction | 2024-07-02T08:20:32Z | Entry not found |
PereLluis13/relik-entity-linking-large-wikipedia-aida-interleave | PereLluis13 | 2024-07-02T08:22:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:21:40Z | Entry not found |
jeromesky/prosodic_accuracy_v2 | jeromesky | 2024-07-02T08:58:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-07-02T08:21:58Z | Entry not found |
Thesiss/skin_llava | Thesiss | 2024-07-02T13:06:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llava_mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:22:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SayedNabeel/Shorts_Writer_LLAMA3_8B_V0.2 | SayedNabeel | 2024-07-02T09:18:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"Story",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T08:23:57Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- Story
---
# Uploaded model
- **Developed by:** SayedNabeel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Story Generation
- **It can be used to generate YouTube Shorts**
- **It was trained for 780 steps on 6.1K rows of data** |
omartariq612/whisper-small-with-tajweed-tokens | omartariq612 | 2024-07-02T08:42:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T08:25:17Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.432213777886737
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.628304527060248
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 87.3
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13.0
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args:
language: dv
metrics:
- name: Wer
type: wer
value: 125.69809089960707
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
> This is the vanilla Whisper small model with tajweed tokens; here is the [notebook](https://www.kaggle.com/code/omartariq612/whisper-small-with-tajweed-tokens/notebook) that generated this repo
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, in the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```
This forces the model to predict in English under the task of speech recognition.
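Under the hood, `forced_decoder_ids` is a list of `(position, token_id)` pairs that `generate()` forces at the given decoder positions. A minimal sketch of that mechanism (the token ids shown are illustrative, not real vocabulary ids):

```python
def apply_forced_ids(predicted: list, forced_decoder_ids: list) -> list:
    """Overwrite the token at each forced position, leaving other predictions alone."""
    out = list(predicted)
    for position, token_id in forced_decoder_ids:
        out[position] = token_id
    return out

# Illustrative ids for <|en|>, <|transcribe|>, <|notimestamps|> at positions 1-3:
forced = [(1, 101), (2, 202), (3, 303)]
print(apply_forced_ids([50258, 0, 0, 0, 440], forced))
# [50258, 101, 202, 303, 440]
```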
## Transcription
### English to English
In this example, the context tokens are 'un-forced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Small on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.432213777886737
```
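Word error rate is word-level edit distance divided by the reference word count; a minimal reference implementation (the `evaluate` metric used above additionally handles normalisation and edge cases):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("mr quilter is the apostle", "mr quilter is apostle"))
# 0.2
```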
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible through the Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
class. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence-level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-small",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
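The core of the chunking idea can be sketched as follows (a simplification: the real pipeline also strides on both sides of each chunk and merges the overlapping token sequences, as described in the blog post):

```python
def chunk_audio(samples, sampling_rate=16_000, chunk_length_s=30, stride_s=5):
    """Split a long waveform into overlapping ~30s chunks for independent transcription."""
    chunk = chunk_length_s * sampling_rate
    step = chunk - stride_s * sampling_rate  # consecutive chunks overlap by stride_s
    return [samples[i:i + chunk] for i in range(0, len(samples), step)]

# Toy example at 1 Hz so the numbers are easy to read:
chunks = chunk_audio(list(range(100)), sampling_rate=1)
print([len(c) for c in chunks])
# [30, 30, 30, 25]
```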
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, and technical language, as well as zero-shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse in lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
apwic/summarization-lora-0 | apwic | 2024-07-02T12:00:03Z | 0 | 0 | null | [
"generated_from_trainer",
"id",
"base_model:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T08:25:33Z | ---
language:
- id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization-lora-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-lora-0
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5656
- Rouge1: 0.4188
- Rouge2: 0.0
- Rougel: 0.4161
- Rougelsum: 0.4157
- Gen Len: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
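Given these settings, the linear schedule decays the learning rate from 5e-05 at step 0 to zero at the final step (8915); a sketch of that schedule, assuming zero warmup steps (the Trainer default):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-5) -> float:
    """Linearly decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0, 8915))      # 5e-05
print(linear_lr(8915, 8915))   # 0.0
```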
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2271 | 1.0 | 1783 | 0.6275 | 0.491 | 0.0 | 0.4864 | 0.4859 | 1.0 |
| 0.7893 | 2.0 | 3566 | 0.5955 | 0.4382 | 0.0 | 0.4358 | 0.4345 | 1.0 |
| 0.7347 | 3.0 | 5349 | 0.5738 | 0.4461 | 0.0 | 0.4432 | 0.4417 | 1.0 |
| 0.7084 | 4.0 | 7132 | 0.5618 | 0.4416 | 0.0 | 0.4409 | 0.4389 | 1.0 |
| 0.6976 | 5.0 | 8915 | 0.5656 | 0.4188 | 0.0 | 0.4161 | 0.4157 | 1.0 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
chienweichang/Llama-3-Taiwan-8B-Instruct-128K-GGUF | chienweichang | 2024-07-03T01:31:29Z | 0 | 0 | null | [
"gguf",
"zhtw",
"text-generation",
"zh",
"en",
"arxiv:2403.20180",
"arxiv:2311.17487",
"base_model:yentinglin/Llama-3-Taiwan-8B-Instruct-128k",
"license:llama3",
"region:us"
] | text-generation | 2024-07-02T08:25:36Z | Temporary Redirect. Redirecting to /chienweichang/Llama-3-Taiwan-8B-Instruct-128k-GGUF/resolve/main/README.md |
camillop/phi-mini-company-classification-gguf-q4m | camillop | 2024-07-02T08:27:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T08:25:40Z | ---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** camillop
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dev372/HarshDev-whisper-tiny-English_2000 | Dev372 | 2024-07-02T08:25:50Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:25:50Z | Entry not found |
vive0921/dog-sdxl | vive0921 | 2024-07-02T08:26:19Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T08:26:19Z | Entry not found |
Ram07/sri | Ram07 | 2024-07-02T20:20:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T08:27:20Z | ---
license: mit
---
|
baxtos/bartik01-4 | baxtos | 2024-07-02T08:30:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:27:40Z | Entry not found |
AlexWortega/bertoid | AlexWortega | 2024-07-02T08:32:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | 2024-07-02T08:29:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
loginworks/Meta-Llama-3-8B-code | loginworks | 2024-07-02T13:30:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T08:30:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
philnet/plantynet-mt5-en2kr | philnet | 2024-07-02T08:36:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-02T08:30:59Z | Entry not found |
whizzzzkid/whizzzzkid_396_5 | whizzzzkid | 2024-07-02T08:32:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:31:59Z | Entry not found |
nourheshamshaheen/llava_FEDPROX_8epochs_2000steps_2clients | nourheshamshaheen | 2024-07-02T08:46:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llava_llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:32:04Z | Entry not found |
hflqf88888/OdysseyAgent-device | hflqf88888 | 2024-07-02T09:20:36Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen",
"text-generation",
"GUI",
"custom_code",
"en",
"zh",
"dataset:OpenGVLab/GUI-Odyssey",
"license:cc-by-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-07-02T08:32:25Z | ---
license: cc-by-4.0
datasets:
- OpenGVLab/GUI-Odyssey
language:
- en
- zh
tags:
- GUI
---
## OdysseyAgent-device
The OdysseyAgent fine-tuned on Train-Device split. |
whizzzzkid/whizzzzkid_397_3 | whizzzzkid | 2024-07-02T08:33:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:32:59Z | Entry not found |
baxtos/bartik02-4 | baxtos | 2024-07-02T08:35:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:33:16Z | Entry not found |
whizzzzkid/whizzzzkid_398_4 | whizzzzkid | 2024-07-02T08:34:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:33:58Z | Entry not found |
LarryAIDraw/ChamHarmoniePonyXL | LarryAIDraw | 2024-07-02T08:41:24Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-07-02T08:34:04Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/543457/harmonie-3-outfits-or-arknights-or-pony-xl |
whizzzzkid/whizzzzkid_399_1 | whizzzzkid | 2024-07-02T08:35:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:34:59Z | Entry not found |
henrik-dra/paligemma-ft-energymeter | henrik-dra | 2024-07-02T14:56:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T08:35:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
infinitymatter/llama-3-8b-chat_army | infinitymatter | 2024-07-02T08:36:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T08:35:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
whizzzzkid/whizzzzkid_400_7 | whizzzzkid | 2024-07-02T08:36:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:36:10Z | Entry not found |
whizzzzkid/whizzzzkid_401_6 | whizzzzkid | 2024-07-02T08:37:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:37:11Z | Entry not found |
femiari/Qwen1.5-7B-Moe | femiari | 2024-07-02T08:47:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T08:38:41Z | Entry not found |
ZeroWw/glm-4-9b-chat-GGUF | ZeroWw | 2024-07-02T09:27:44Z | 0 | 0 | null | [
"gguf",
"en",
"license:mit",
"region:us"
] | null | 2024-07-02T08:38:49Z |
---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.

Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.

Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as pure f16.
|
tamilanda/efedf | tamilanda | 2024-07-02T08:38:59Z | 0 | 0 | null | [
"license:gpl-2.0",
"region:us"
] | null | 2024-07-02T08:38:59Z | ---
license: gpl-2.0
---
|