| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-24 12:28:46) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 493 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-24 12:27:57) | card (string, length 11–1.01M) |
---|---|---|---|---|---|---|---|---|---|
lsmille/lora_evo_ta_all_layers_2 | lsmille | 2024-05-28T19:10:07Z | 6 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:togethercomputer/evo-1-8k-base",
"base_model:adapter:togethercomputer/evo-1-8k-base",
"license:apache-2.0",
"region:us"
] | null | 2024-05-28T04:19:32Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: togethercomputer/evo-1-8k-base
model-index:
- name: lora_evo_ta_all_layers_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_evo_ta_all_layers_2
This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1660
## Model description
- lora_alpha = 32
- lora_dropout = 0.05
- lora_r = 16
- epochs = 9 <---------------
- learning rate = 3e-4
- warmup_steps = 0.5
- gradient_accumulation_steps = 8
- train_batch = 1
- eval_batch = 1
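For reference, these settings map onto a PEFT `LoraConfig` roughly as follows (a minimal sketch assuming the base model loads through `AutoModelForCausalLM` with `trust_remote_code`; the adapted target modules are not listed in this card, so the value below is a hypothetical placeholder):
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/evo-1-8k-base", trust_remote_code=True
)

# Values taken from the list above; target_modules is a hypothetical placeholder
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["Wqkv"],  # assumption: set to the projections actually adapted
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```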
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.5
- num_epochs: 9
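These values translate roughly into the following `TrainingArguments` (a sketch of the reported configuration, not the exact training script; `output_dir` is a placeholder, and the reported warmup of 0.5 steps is effectively ignored by a constant scheduler):
```python
from transformers import TrainingArguments

# Sketch of the configuration reported above
training_args = TrainingArguments(
    output_dir="lora_evo_ta_all_layers_2",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,  # total train batch size 8
    num_train_epochs=9,
    lr_scheduler_type="constant",
    seed=42,
)
```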
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0681 | 0.9925 | 33 | 2.9815 |
| 2.9165 | 1.9850 | 66 | 2.9530 |
| 2.8091 | 2.9774 | 99 | 2.9446 |
| 2.6361 | 4.0 | 133 | 2.9406 |
| 2.6312 | 4.9925 | 166 | 2.9409 |
| 2.57 | 5.9850 | 199 | 2.9978 |
| 2.5215 | 6.9774 | 232 | 3.0450 |
| 2.4107 | 8.0 | 266 | 3.0763 |
| 2.4272 | 8.9323 | 297 | 3.1660 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
lsmille/lora_evo_ta_all_layers_3 | lsmille | 2024-05-28T19:09:17Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:togethercomputer/evo-1-8k-base",
"base_model:adapter:togethercomputer/evo-1-8k-base",
"license:apache-2.0",
"region:us"
] | null | 2024-05-28T05:08:39Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: togethercomputer/evo-1-8k-base
model-index:
- name: lora_evo_ta_all_layers_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_evo_ta_all_layers_3
This model is a fine-tuned version of [togethercomputer/evo-1-8k-base](https://huggingface.co/togethercomputer/evo-1-8k-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9530
## Model description
- lora_alpha = 16 <--------
- lora_dropout = 0.05
- lora_r = 8 <--------
- epochs = 3
- learning rate = 3e-4
- warmup_steps = 0.5
- gradient_accumulation_steps = 8
- train_batch = 1
- eval_batch = 1
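The resulting adapter can typically be loaded back on top of the Evo base with PEFT (a minimal sketch, assuming the base model loads through `AutoModelForCausalLM` with `trust_remote_code`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/evo-1-8k-base", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "togethercomputer/evo-1-8k-base", trust_remote_code=True
)

# Attach the LoRA weights from this repository
model = PeftModel.from_pretrained(base, "lsmille/lora_evo_ta_all_layers_3")
```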
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.5
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0867 | 0.9925 | 33 | 3.0207 |
| 2.9359 | 1.9850 | 66 | 2.9592 |
| 2.7604 | 2.9774 | 99 | 2.9530 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
amiguel/lightining_studio | amiguel | 2024-05-28T19:01:17Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"medical",
"text-classification",
"dataset:HuggingFaceFW/fineweb",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-05-22T06:09:31Z | ---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
tags:
- medical
--- |
dtorber/BioNLP-conditional-tokens-decoder-eLife | dtorber | 2024-05-28T18:59:39Z | 97 | 0 | transformers | [
"transformers",
"safetensors",
"led",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-05-28T10:54:52Z | ---
tags:
- summarization
- generated_from_trainer
model-index:
- name: BioNLP-conditional-tokens-decoder-eLife
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioNLP-conditional-tokens-decoder-eLife
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
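That said, assuming this LED checkpoint works with the standard transformers summarization pipeline, basic usage might look like this (input text and generation lengths are placeholders):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="dtorber/BioNLP-conditional-tokens-decoder-eLife"
)
article = "Paste the text of an eLife article here..."
print(summarizer(article, max_length=256, min_length=64)[0]["summary_text"])
```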
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
|
marian-nmt/bleurt-20 | marian-nmt | 2024-05-28T18:58:21Z | 0 | 0 | null | [
"region:us"
] | null | 2024-05-28T18:24:56Z | # BLEURT-20
This repository hosts checkpoints compatible with Marian NMT.
|
phongtintruong/misjava-api-052924 | phongtintruong | 2024-05-28T18:56:47Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T18:26:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
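In the absence of starter code, and assuming this is a standard transformers causal language model with a chat template (as the `mistral`/`conversational` tags suggest), a minimal sketch might look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phongtintruong/misjava-api-052924"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt content is a placeholder
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```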
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
luciorramos/llm_tcc_sp90_ep90_ds1000 | luciorramos | 2024-05-28T18:56:46Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T18:49:35Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** luciorramos
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-20151707 | fine-tuned | 2024-05-28T18:55:56Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-20151707",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T18:55:25Z | ---
license: apache-2.0
datasets:
- fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-20151707
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-20151707',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-34917964 | fine-tuned | 2024-05-28T18:55:41Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-34917964",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T18:55:04Z | ---
license: apache-2.0
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-34917964
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-34917964',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
mago18/donut-demo | mago18 | 2024-05-28T18:55:34Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-28T18:55:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-60453771 | fine-tuned | 2024-05-28T18:55:34Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-60453771",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T18:55:03Z | ---
license: apache-2.0
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-60453771
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-60453771',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-10552781 | fine-tuned | 2024-05-28T18:55:10Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-10552781",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T18:54:33Z | ---
license: apache-2.0
datasets:
- fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-10552781
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-10552781',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-89836585 | fine-tuned | 2024-05-28T18:54:24Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-89836585",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T18:53:55Z | ---
license: apache-2.0
datasets:
- fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-89836585
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-89836585',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-83930416 | fine-tuned | 2024-05-28T18:53:40Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-83930416",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T18:53:06Z | ---
license: apache-2.0
datasets:
- fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-83930416
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-83930416',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-89953157 | fine-tuned | 2024-05-28T18:52:41Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-89953157",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T18:52:06Z | ---
license: apache-2.0
datasets:
- fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-89953157
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-89953157',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-93651135 | fine-tuned | 2024-05-28T18:52:08Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-93651135",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T18:51:30Z | ---
license: apache-2.0
datasets:
- fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-93651135
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-93651135',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
odicem/tinyllama-cleantech-v1 | odicem | 2024-05-28T18:50:08Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T18:47:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
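In the meantime, assuming a standard causal language model, a minimal text-generation sketch would be (prompt and generation settings are illustrative only):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation", model="odicem/tinyllama-cleantech-v1", device_map="auto"
)
print(generator("Cleantech investment trends:", max_new_tokens=100)[0]["generated_text"])
```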
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BehradG/vit-base-patch16-224-in21k-finetuned-lora-food101 | BehradG | 2024-05-28T18:47:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T18:04:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ds28/llama2-causal | ds28 | 2024-05-28T18:47:20Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T14:40:19Z | ---
license: apache-2.0
---
|
dtorber/BioNLP-conditional-tokens-encoder-eLife | dtorber | 2024-05-28T18:47:18Z | 97 | 0 | transformers | [
"transformers",
"safetensors",
"led",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-05-28T10:44:20Z | ---
tags:
- summarization
- generated_from_trainer
model-index:
- name: BioNLP-conditional-tokens-encoder-eLife
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioNLP-conditional-tokens-encoder-eLife
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.3739167643078955e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.2
|
MrezaPRZ/codellama_synthetic_create_context_bigquery | MrezaPRZ | 2024-05-28T18:32:29Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T18:29:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SecondNan/ppo-LunaLander-v2 | SecondNan | 2024-05-28T18:31:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-28T18:31:19Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.38 +/- 19.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("SecondNan/ppo-LunaLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
lmbelo/Phi-3-mini-4k-instruct | lmbelo | 2024-05-28T18:28:35Z | 6 | 0 | mlx | [
"mlx",
"safetensors",
"phi3",
"nlp",
"code",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"region:us"
] | text-generation | 2024-05-27T11:50:17Z | ---
language:
- en
license: mit
tags:
- nlp
- code
- mlx
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.0
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
# lmbelo/Phi-3-mini-4k-instruct
The Model [lmbelo/Phi-3-mini-4k-instruct](https://huggingface.co/lmbelo/Phi-3-mini-4k-instruct) was converted to MLX format from [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) using mlx-lm version **0.13.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("lmbelo/Phi-3-mini-4k-instruct")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
irenepap/results | irenepap | 2024-05-28T18:28:32Z | 183 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T18:28:19Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2919
- Precision: 0.8957
- Recall: 0.8226
- F1: 0.8576
## Model description
More information needed
## Intended uses & limitations
More information needed
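Still, basic usage with the standard text-classification pipeline should look roughly like the sketch below (the label names and their meaning depend on the fine-tuning setup and are not documented here):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="irenepap/results")
print(classifier("Example sentence to classify."))
```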
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.3611 | 0.2 | 500 | 0.3194 | 0.8640 | 0.8324 | 0.8479 |
| 0.3106 | 0.4 | 1000 | 0.3039 | 0.8905 | 0.8013 | 0.8435 |
| 0.3027 | 0.6 | 1500 | 0.2954 | 0.9022 | 0.7927 | 0.8439 |
| 0.2952 | 0.81 | 2000 | 0.2864 | 0.8966 | 0.8185 | 0.8558 |
| 0.2905 | 1.01 | 2500 | 0.2875 | 0.8973 | 0.8150 | 0.8542 |
| 0.2605 | 1.21 | 3000 | 0.2841 | 0.8924 | 0.8369 | 0.8637 |
| 0.2591 | 1.41 | 3500 | 0.2820 | 0.8926 | 0.8444 | 0.8678 |
| 0.2574 | 1.61 | 4000 | 0.2826 | 0.8916 | 0.8359 | 0.8629 |
| 0.2602 | 1.81 | 4500 | 0.2764 | 0.8989 | 0.8291 | 0.8626 |
| 0.2561 | 2.01 | 5000 | 0.2813 | 0.8891 | 0.8454 | 0.8667 |
| 0.2195 | 2.22 | 5500 | 0.2869 | 0.9072 | 0.8110 | 0.8564 |
| 0.2209 | 2.42 | 6000 | 0.2845 | 0.9002 | 0.8216 | 0.8591 |
| 0.2178 | 2.62 | 6500 | 0.2827 | 0.8991 | 0.8285 | 0.8624 |
| 0.22 | 2.82 | 7000 | 0.2919 | 0.8957 | 0.8226 | 0.8576 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
isaacchung/QwenPhi-7B-slerp | isaacchung | 2024-05-28T18:26:50Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"Qwen/Qwen1.5-7B-Chat",
"microsoft/Phi-3-mini-128k-instruct",
"base_model:Qwen/Qwen1.5-7B-Chat",
"base_model:merge:Qwen/Qwen1.5-7B-Chat",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:merge:microsoft/Phi-3-mini-128k-instruct",
"region:us"
] | null | 2024-05-28T18:26:49Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen1.5-7B-Chat
- microsoft/Phi-3-mini-128k-instruct
base_model:
- Qwen/Qwen1.5-7B-Chat
- microsoft/Phi-3-mini-128k-instruct
---
# QwenPhi-7B-slerp
QwenPhi-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat)
* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Qwen/Qwen1.5-7B-Chat
layer_range: [0, 32]
- model: microsoft/Phi-3-mini-128k-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: microsoft/Phi-3-mini-128k-instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "isaacchung/QwenPhi-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
imdatta0/meta_llama_3_MetaMathQA_40K_ortho | imdatta0 | 2024-05-28T18:21:37Z | 5 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2024-05-28T18:21:33Z | ---
license: llama3
library_name: peft
tags:
- unsloth
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
model-index:
- name: meta_llama_3_MetaMathQA_40K_ortho
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meta_llama_3_MetaMathQA_40K_ortho
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8807 | 0.0211 | 13 | 0.6706 |
| 0.6201 | 0.0421 | 26 | 0.6389 |
| 0.605 | 0.0632 | 39 | 0.6211 |
| 0.5929 | 0.0842 | 52 | 0.6119 |
| 0.5555 | 0.1053 | 65 | 0.6045 |
| 0.5689 | 0.1264 | 78 | 0.5980 |
| 0.5767 | 0.1474 | 91 | 0.5914 |
| 0.5584 | 0.1685 | 104 | 0.5886 |
| 0.5411 | 0.1896 | 117 | 0.5847 |
| 0.5417 | 0.2106 | 130 | 0.5829 |
| 0.5388 | 0.2317 | 143 | 0.5787 |
| 0.5473 | 0.2527 | 156 | 0.5748 |
| 0.5432 | 0.2738 | 169 | 0.5701 |
| 0.5402 | 0.2949 | 182 | 0.5677 |
| 0.5318 | 0.3159 | 195 | 0.5655 |
| 0.5155 | 0.3370 | 208 | 0.5627 |
| 0.5231 | 0.3580 | 221 | 0.5584 |
| 0.528 | 0.3791 | 234 | 0.5578 |
| 0.5372 | 0.4002 | 247 | 0.5545 |
| 0.5145 | 0.4212 | 260 | 0.5517 |
| 0.5246 | 0.4423 | 273 | 0.5487 |
| 0.5299 | 0.4633 | 286 | 0.5473 |
| 0.5297 | 0.4844 | 299 | 0.5445 |
| 0.5089 | 0.5055 | 312 | 0.5425 |
| 0.5208 | 0.5265 | 325 | 0.5409 |
| 0.5114 | 0.5476 | 338 | 0.5398 |
| 0.5092 | 0.5687 | 351 | 0.5384 |
| 0.4886 | 0.5897 | 364 | 0.5359 |
| 0.5121 | 0.6108 | 377 | 0.5337 |
| 0.5079 | 0.6318 | 390 | 0.5324 |
| 0.4996 | 0.6529 | 403 | 0.5310 |
| 0.505 | 0.6740 | 416 | 0.5301 |
| 0.5039 | 0.6950 | 429 | 0.5288 |
| 0.5073 | 0.7161 | 442 | 0.5275 |
| 0.4988 | 0.7371 | 455 | 0.5264 |
| 0.4857 | 0.7582 | 468 | 0.5260 |
| 0.4889 | 0.7793 | 481 | 0.5252 |
| 0.4836 | 0.8003 | 494 | 0.5244 |
| 0.5181 | 0.8214 | 507 | 0.5237 |
| 0.5052 | 0.8424 | 520 | 0.5231 |
| 0.4908 | 0.8635 | 533 | 0.5228 |
| 0.5136 | 0.8846 | 546 | 0.5225 |
| 0.493 | 0.9056 | 559 | 0.5223 |
| 0.4908 | 0.9267 | 572 | 0.5222 |
| 0.5066 | 0.9478 | 585 | 0.5221 |
| 0.5116 | 0.9688 | 598 | 0.5219 |
| 0.5073 | 0.9899 | 611 | 0.5219 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
legraphista/AutoCoder-IMat-GGUF | legraphista | 2024-05-28T18:20:10Z | 371 | 1 | gguf | [
"gguf",
"quantized",
"GGUF",
"imatrix",
"quantization",
"imat",
"static",
"text-generation",
"base_model:Bin12345/AutoCoder",
"base_model:quantized:Bin12345/AutoCoder",
"license:apache-2.0",
"region:us",
"conversational"
] | text-generation | 2024-05-28T15:04:54Z | ---
base_model: Bin12345/AutoCoder
inference: false
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- imatrix
- static
---
# AutoCoder-IMat-GGUF
_Llama.cpp imatrix quantization of Bin12345/AutoCoder_
Original Model: [Bin12345/AutoCoder](https://huggingface.co/Bin12345/AutoCoder)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3010](https://github.com/ggerganov/llama.cpp/releases/tag/b3010)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
- [AutoCoder-IMat-GGUF](#autocoder-imat-gguf)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Simple chat template](#simple-chat-template)
- [Chat template with system prompt](#chat-template-with-system-prompt)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [AutoCoder.Q8_0.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q8_0.gguf) | Q8_0 | 35.43GB | ✅ Available | ⚪ Static | 📦 No |
| [AutoCoder.Q6_K.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q6_K.gguf) | Q6_K | 27.36GB | ✅ Available | ⚪ Static | 📦 No |
| [AutoCoder.Q4_K.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q4_K.gguf) | Q4_K | 19.94GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.Q3_K.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q3_K.gguf) | Q3_K | 16.09GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.Q2_K.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q2_K.gguf) | Q2_K | 12.36GB | ✅ Available | 🟢 IMatrix | 📦 No |
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [AutoCoder.BF16/*](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/tree/main/AutoCoder.BF16) | BF16 | 66.69GB | ✅ Available | ⚪ Static | ✂ Yes |
| [AutoCoder.FP16/*](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/tree/main/AutoCoder.FP16) | F16 | 66.69GB | ✅ Available | ⚪ Static | ✂ Yes |
| [AutoCoder.Q5_K.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q5_K.gguf) | Q5_K | 23.54GB | ✅ Available | ⚪ Static | 📦 No |
| [AutoCoder.Q5_K_S.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q5_K_S.gguf) | Q5_K_S | 22.96GB | ✅ Available | ⚪ Static | 📦 No |
| [AutoCoder.Q4_K_S.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q4_K_S.gguf) | Q4_K_S | 18.94GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.Q3_K_L.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q3_K_L.gguf) | Q3_K_L | 17.56GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.Q3_K_S.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q3_K_S.gguf) | Q3_K_S | 14.42GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.Q2_K_S.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.Q2_K_S.gguf) | Q2_K_S | 11.39GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ4_NL.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ4_NL.gguf) | IQ4_NL | 18.88GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ4_XS.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ4_XS.gguf) | IQ4_XS | 17.86GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ3_M.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ3_M.gguf) | IQ3_M | 15.03GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ3_S.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ3_S.gguf) | IQ3_S | 14.48GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ3_XS.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ3_XS.gguf) | IQ3_XS | 13.71GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ3_XXS.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ3_XXS.gguf) | IQ3_XXS | 12.85GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ2_M.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ2_M.gguf) | IQ2_M | 11.36GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ2_S.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ2_S.gguf) | IQ2_S | 10.48GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ2_XS.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ2_XS.gguf) | IQ2_XS | 9.91GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ2_XXS.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ2_XXS.gguf) | IQ2_XXS | 8.92GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ1_M.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ1_M.gguf) | IQ1_M | 7.82GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [AutoCoder.IQ1_S.gguf](https://huggingface.co/legraphista/AutoCoder-IMat-GGUF/blob/main/AutoCoder.IQ1_S.gguf) | IQ1_S | 7.16GB | ✅ Available | 🟢 IMatrix | 📦 No |
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/AutoCoder-IMat-GGUF --include "AutoCoder.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/AutoCoder-IMat-GGUF --include "AutoCoder.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
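The same files can also be fetched from Python with `huggingface_hub`, for example:
```python
# Download a single GGUF quant with the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="legraphista/AutoCoder-IMat-GGUF",
    filename="AutoCoder.Q8_0.gguf",
    local_dir="./",
)
print(local_path)  # path of the downloaded file
```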
---
## Inference
### Simple chat template
```
Human: Can you provide ways to eat combinations of bananas and dragonfruits?
Assistant: Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|EOT|>
Human: What about solving an 2x + 3 = 7 equation?
Assistant:
```
### Chat template with system prompt
```
You are a helpful AI.
Human: Can you provide ways to eat combinations of bananas and dragonfruits?
Assistant: Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|EOT|>
Human: What about solving an 2x + 3 = 7 equation?
Assistant:
```
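The templates above can also be assembled programmatically; a small sketch using plain string formatting (the helper name is illustrative):
```python
# Build a prompt string following the chat template shown above.
def build_prompt(messages, system_prompt=None):
    parts = [system_prompt] if system_prompt else []
    for msg in messages:
        if msg["role"] == "user":
            parts.append(f"Human: {msg['content']}")
        else:
            parts.append(f"Assistant: {msg['content']}<|EOT|>")
    parts.append("Assistant:")  # leave the assistant turn open for generation
    return "\n".join(parts)

prompt = build_prompt(
    [{"role": "user", "content": "Can you solve 2x + 3 = 7?"}],
    system_prompt="You are a helpful AI.",
)
```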
### Llama.cpp
```
llama.cpp/main -m AutoCoder.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
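Alternatively, the GGUF files can be loaded from Python with the `llama-cpp-python` bindings; a hedged sketch (context size and sampling settings are illustrative):
```python
# Run one of the GGUF quants via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="AutoCoder.Q8_0.gguf", n_ctx=4096)
out = llm(
    "Human: Can you solve 2x + 3 = 7?\nAssistant:",  # prompt built per the template above
    max_tokens=256,
    stop=["<|EOT|>", "Human:"],
)
print(out["choices"][0]["text"])
```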
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `AutoCoder.Q8_0`)
3. Run `gguf-split --merge AutoCoder.Q8_0/AutoCoder.Q8_0-00001-of-XXXXX.gguf AutoCoder.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |
yh1306/l1 | yh1306 | 2024-05-28T18:19:15Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-28T18:16:53Z | ---
license: apache-2.0
---
|
chirbard/ppo-Pyramids | chirbard | 2024-05-28T18:17:58Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2024-04-28T10:39:23Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: chirbard/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jayashreedevi2020/wav2vec2-large-xls-r-300m-assamese_speech_to_IPA_js | jayashreedevi2020 | 2024-05-28T18:15:46Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-28T17:18:46Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-assamese_speech_to_IPA_js
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: as
split: test
args: as
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-assamese_speech_to_IPA_js
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7838
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---:|
| 2.4782 | 9.8765 | 400 | 1.6148 | 1.0 |
| 0.69 | 19.7531 | 800 | 0.7838 | 1.0 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
surya-ravindra/calvin_finetuning | surya-ravindra | 2024-05-28T18:11:13Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-28T18:04:29Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LucyintheSky/24-5-10_24-5-17-2000-pred1 | LucyintheSky | 2024-05-28T18:08:18Z | 0 | 0 | null | [
"safetensors",
"Image Regression",
"dataset:LucyintheSky/24-5-10_24-5-17-2000",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"region:us"
] | null | 2024-05-28T18:06:44Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- Image Regression
datasets:
- "LucyintheSky/24-5-10_24-5-17-2000"
metrics:
- accuracy
model-index:
- name: "24-5-10_24-5-17-2000-pred1"
results: []
---
# 24-5-10_24-5-17-2000-pred1
## Image Regression Model
This model was trained with [Image Regression Model Trainer](https://github.com/TonyAssi/ImageRegression/tree/main). It takes an image as input and outputs a float value.
```python
from ImageRegression import predict
predict(repo_id='LucyintheSky/24-5-10_24-5-17-2000-pred1',image_path='image.jpg')
```
---
## Dataset
Dataset: LucyintheSky/24-5-10_24-5-17-2000\
Value Column: 'sales_index'\
Train Test Split: 0.2
---
## Training
Base Model: [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)\
Epochs: 10\
Learning Rate: 0.0001
---
## Usage
### Download
```bash
git clone https://github.com/TonyAssi/ImageRegression.git
cd ImageRegression
```
### Installation
```bash
pip install -r requirements.txt
```
### Import
```python
from ImageRegression import train_model, upload_model, predict
```
### Inference (Prediction)
- **repo_id** 🤗 repo id of the model
- **image_path** path to image
```python
predict(repo_id='LucyintheSky/24-5-10_24-5-17-2000-pred1',
image_path='image.jpg')
```
The first time this function is called it'll download the safetensor model. Subsequent function calls will run faster.
### Train Model
- **dataset_id** 🤗 dataset id
- **value_column_name** column name of prediction values in dataset
- **test_split** test split of the train/test split
- **output_dir** the directory where the checkpoints will be saved
- **num_train_epochs** training epochs
- **learning_rate** learning rate
```python
train_model(dataset_id='LucyintheSky/24-5-10_24-5-17-2000',
value_column_name='sales_index',
test_split=0.2,
output_dir='./results',
num_train_epochs=10,
learning_rate=0.0001)
```
The trainer will save the checkpoints in the output_dir location. The model.safetensors are the trained weights you'll use for inference (prediction).
### Upload Model
This function will upload your model to the 🤗 Hub.
- **model_id** the name of the model id
- **token** go [here](https://huggingface.co/settings/tokens) to create a new 🤗 token
- **checkpoint_dir** checkpoint folder that will be uploaded
```python
upload_model(model_id='24-5-10_24-5-17-2000-pred1',
token='YOUR_HF_TOKEN',
checkpoint_dir='./results/checkpoint-940')
``` |
chirbard/poca-SoccerTwos | chirbard | 2024-05-28T18:07:45Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-05-17T07:45:06Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: chirbard/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DiederikMartens/eBERT_sa_cv_13_fold9 | DiederikMartens | 2024-05-28T18:04:33Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T17:52:49Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_13_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_13_fold9
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6852
- F1: 0.5593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.6179 | 0.4328 |
| 0.6082 | 2.0 | 650 | 0.5883 | 0.4874 |
| 0.6082 | 3.0 | 975 | 0.6852 | 0.5593 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
enithgma/asogrocaima | enithgma | 2024-05-28T17:59:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-28T17:59:01Z | ---
license: apache-2.0
---
|
MLP-SEMO/semo_stage1 | MLP-SEMO | 2024-05-28T17:55:06Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-05-27T07:17:29Z | ---
{}
---
Trained: Reconstruction tokens
```python
import torch
from safetensors.torch import load_file
from huggingface_hub import hf_hub_download
from semo_lm.model import SemoLlama
from semo_lm.semo_utils.prefix_vars import PAD_TOKEN_ID
# Load the base Llama-3 Instruct model through the SemoLlama wrapper
model = SemoLlama.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    pad_token_id=PAD_TOKEN_ID
)
# Initialise the additional sentence-encoder weights used by SemoLlama
model.init_sentence_encoder_weights()

# Download the trained reconstruction-token embeddings and load them into the embedding layer
repo_id = "MLP-SEMO/Llama-Reconstruction-embedding"
filename = "embed_tokens.safetensors"
downloaded_file = hf_hub_download(repo_id=repo_id, filename=filename)
embedding_weights = load_file(downloaded_file)
model.model.embed_tokens.load_state_dict(embedding_weights)
```
|
lukarape/w2v-bert-2.0-acoustic-erebuni-commonvoice-v23-hyper2 | lukarape | 2024-05-28T17:54:27Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T17:54:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TopicNavi/Wikipedia-example-topic-model | TopicNavi | 2024-05-28T17:54:15Z | 4 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2024-05-28T17:54:11Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# Wikipedia-example-topic-model
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("TopicNavi/Wikipedia-example-topic-model")
topic_model.get_topic_info()
```
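Unseen documents can then be assigned to the learned topics with `transform`, for example:
```python
# Assign topics to new documents with the loaded model.
docs = ["The team won the championship after a dramatic penalty shootout."]
topics, probs = topic_model.transform(docs)
print(topics)                             # predicted topic id per document
print(topic_model.get_topic(topics[0]))   # keywords of the first predicted topic
```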
## Topic overview
* Number of topics: 227
* Number of training documents: 25000
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | of - the - to - and - in | 10 | -1_of_the_to_and |
| 0 | actor - he - award - his - born | 7457 | 0_actor_he_award_his |
| 1 | film - directed - stars - written - by | 1494 | 1_film_directed_stars_written |
| 2 | actress - she - her - award - born | 1487 | 2_actress_she_her_award |
| 3 | series - premiered - created - season - television | 1339 | 3_series_premiered_created_season |
| 4 | band - rock - guitarist - formed - lead | 740 | 4_band_rock_guitarist_formed |
| 5 | species - are - genus - breed - dog | 501 | 5_species_are_genus_breed |
| 6 | indian - hindi - filmfare - cinema - tamil | 428 | 6_indian_hindi_filmfare_cinema |
| 7 | footballer - club - professional - plays - midfielder | 395 | 7_footballer_club_professional_plays |
| 8 | king - queen - prince - duke - throne | 372 | 8_king_queen_prince_duke |
| 9 | symptoms - disease - may - disorder - pain | 310 | 9_symptoms_disease_may_disorder |
| 10 | war - battle - fought - empire - german | 299 | 10_war_battle_fought_empire |
| 11 | sexual - sex - or - gender - activity | 284 | 11_sexual_sex_or_gender |
| 12 | singer - songwriter - album - music - albums | 268 | 12_singer_songwriter_album_music |
| 13 | language - spoken - languages - ethnic - speakers | 262 | 13_language_spoken_languages_ethnic |
| 14 | company - multinational - headquartered - corporation - technology | 204 | 14_company_multinational_headquartered_corporation |
| 15 | species - plant - genus - fruit - plants | 203 | 15_species_plant_genus_fruit |
| 16 | poet - philosopher - his - writer - novelist | 197 | 16_poet_philosopher_his_writer |
| 17 | aircraft - boeing - fighter - air - designed | 185 | 17_aircraft_boeing_fighter_air |
| 18 | game - xbox - playstation - developed - windows | 183 | 18_game_xbox_playstation_developed |
| 19 | city - capital - population - area - largest | 175 | 19_city_capital_population_area |
| 20 | manga - anime - aired - adaptation - japanese | 164 | 20_manga_anime_aired_adaptation |
| 21 | hindilanguage - indian - stars - film - produced | 156 | 21_hindilanguage_indian_stars_film |
| 22 | bible - jesus - god - hebrew - testament | 156 | 22_bible_jesus_god_hebrew |
| 23 | mathematics - probability - function - distribution - numbers | 151 | 23_mathematics_probability_function_distribution |
| 24 | nba - basketball - player - association - allstar | 150 | 24_nba_basketball_player_association |
| 25 | killer - convicted - serial - murders - murder | 148 | 25_killer_convicted_serial_murders |
| 26 | rapper - album - records - released - professionally | 141 | 26_rapper_album_records_released |
| 27 | wrestling - wwe - wrestler - ring - professional | 140 | 27_wrestling_wwe_wrestler_ring |
| 28 | forces - armed - military - force - air | 133 | 28_forces_armed_military_force |
| 29 | toyota - car - honda - manufactured - model | 124 | 29_toyota_car_honda_manufactured |
| 30 | nfl - football - quarterback - college - played | 116 | 30_nfl_football_quarterback_college |
| 31 | greek - mythology - goddess - ancient - roman | 115 | 31_greek_mythology_goddess_ancient |
| 32 | disney - walt - entertainment - studios - company | 110 | 32_disney_walt_entertainment_studios |
| 33 | team - compete - division - conference - league | 106 | 33_team_compete_division_conference |
| 34 | medication - treat - used - mouth - taken | 105 | 34_medication_treat_used_mouth |
| 35 | political - social - economic - democracy - government | 100 | 35_political_social_economic_democracy |
| 36 | football - club - league - bundesliga - professional | 96 | 36_football_club_league_bundesliga |
| 37 | dish - sauce - cheese - meat - vegetables | 92 | 37_dish_sauce_cheese_meat |
| 38 | element - chemical - atomic - symbol - metal | 92 | 38_element_chemical_atomic_symbol |
| 39 | mind - psychology - or - that - philosophical | 91 | 39_mind_psychology_or_that |
| 40 | novel - published - author - story - book | 89 | 40_novel_published_author_story |
| 41 | rifle - cartridge - pistol - gun - sig | 85 | 41_rifle_cartridge_pistol_gun |
| 42 | cup - fifa - tournament - world - teams | 84 | 42_cup_fifa_tournament_world |
| 43 | cofounder - ceo - entrepreneur - investor - facebook | 82 | 43_cofounder_ceo_entrepreneur_investor |
| 44 | computer - programming - data - software - language | 82 | 44_computer_programming_data_software |
| 45 | marvel - comics - comic - character - books | 81 | 45_marvel_comics_comic_character |
| 46 | ufc - mixed - martial - fighting - champion | 77 | 46_ufc_mixed_martial_fighting |
| 47 | korean - south - kim - roles - my | 76 | 47_korean_south_kim_roles |
| 48 | korean - south - entertainment - group - girl | 75 | 48_korean_south_entertainment_group |
| 49 | president - served - vice - states - bush | 74 | 49_president_served_vice_states |
| 50 | mafia - crime - cartel - organized - drug | 73 | 50_mafia_crime_cartel_organized |
| 51 | islands - island - australia - ocean - pacific | 73 | 51_islands_island_australia_ocean |
| 52 | state - india - pradesh - capital - region | 72 | 52_state_india_pradesh_capital |
| 53 | politician - president - served - minister - since | 69 | 53_politician_president_served_minister |
| 54 | city - county - populous - metropolitan - population | 69 | 54_city_county_populous_metropolitan |
| 55 | africa - country - republic - officially - west | 69 | 55_africa_country_republic_officially |
| 56 | university - research - college - private - universities | 66 | 56_university_research_college_private |
| 57 | ceremony - presented - awards - academy - ampas | 65 | 57_ceremony_presented_awards_academy |
| 58 | tennis - open - titles - singles - atp | 64 | 58_tennis_open_titles_singles |
| 59 | korean - kim - kst - aired - south | 64 | 59_korean_kim_kst_aired |
| 60 | music - rock - genre - pop - punk | 63 | 60_music_rock_genre_pop |
| 61 | caribbean - islands - island - country - antilles | 61 | 61_caribbean_islands_island_country |
| 62 | politician - senator - republican - democratic - party | 60 | 62_politician_senator_republican_democratic |
| 63 | electric - electromagnetic - radiation - energy - magnetic | 57 | 63_electric_electromagnetic_radiation_energy |
| 64 | wars - star - jedi - skywalker - trilogy | 55 | 64_wars_star_jedi_skywalker |
| 65 | planet - solar - sun - earth - jupiter | 54 | 65_planet_solar_sun_earth |
| 66 | class - ship - navy - ships - submarines | 54 | 66_class_ship_navy_ships |
| 67 | president - sabha - house - government - chief | 53 | 67_president_sabha_house_government |
| 68 | alphabet - letter - alphabets - languages - english | 49 | 68_alphabet_letter_alphabets_languages |
| 69 | football - team - represents - mens - governing | 48 | 69_football_team_represents_mens |
| 70 | club - football - stadium - league - tier | 48 | 70_club_football_stadium_league |
| 71 | empire - ancient - egypt - bc - civilization | 45 | 71_empire_ancient_egypt_bc |
| 72 | manufacturer - automobile - automotive - stellantis - company | 45 | 72_manufacturer_automobile_automotive_stellantis |
| 73 | flag - flags - national - tricolour - anthem | 44 | 73_flag_flags_national_tricolour |
| 74 | church - religious - christianity - religion - movement | 43 | 74_church_religious_christianity_religion |
| 75 | minister - prime - conservative - mp - served | 42 | 75_minister_prime_conservative_mp |
| 76 | wine - drink - sugar - alcoholic - cocktail | 41 | 76_wine_drink_sugar_alcoholic |
| 77 | hindu - hinduism - shiva - vishnu - goddess | 41 | 77_hindu_hinduism_shiva_vishnu |
| 78 | batman - dc - comics - gotham - superhero | 41 | 78_batman_dc_comics_gotham |
| 79 | formula - racing - driver - prix - championship | 41 | 79_formula_racing_driver_prix |
| 80 | airline - airlines - airport - carrier - destinations | 41 | 80_airline_airlines_airport_carrier |
| 81 | compound - acid - organic - chemical - formula | 40 | 81_compound_acid_organic_chemical |
| 82 | nazi - german - hitler - adolf - germany | 40 | 82_nazi_german_hitler_adolf |
| 83 | bond - james - eon - spy - mi6 | 39 | 83_bond_james_eon_spy |
| 84 | belief - god - religious - existence - atheism | 39 | 84_belief_god_religious_existence |
| 85 | energy - constant - force - heat - unit | 39 | 85_energy_constant_force_heat |
| 86 | minister - prime - indian - pakistan - india | 39 | 86_minister_prime_indian_pakistan |
| 87 | roman - emperor - bc - augustus - caesar | 38 | 87_roman_emperor_bc_augustus |
| 88 | asia - gulf - sea - east - oman | 38 | 88_asia_gulf_sea_east |
| 89 | boxer - heavyweight - title - wba - ibf | 37 | 89_boxer_heavyweight_title_wba |
| 90 | county - england - city - ceremonial - london | 36 | 90_county_england_city_ceremonial |
| 91 | data - learning - algorithm - machine - neural | 36 | 91_data_learning_algorithm_machine |
| 92 | day - holiday - celebrated - thanksgiving - celebration | 35 | 92_day_holiday_celebrated_thanksgiving |
| 93 | saul - breaking - bad - call - better | 34 | 93_saul_breaking_bad_call |
| 94 | punishment - death - execution - homicide - suicide | 34 | 94_punishment_death_execution_homicide |
| 95 | degree - education - secondary - bachelor - bachelors | 34 | 95_degree_education_secondary_bachelor |
| 96 | console - nintendo - playstation - game - consoles | 34 | 96_console_nintendo_playstation_game |
| 97 | iphone - apple - ipad - pro - inc | 34 | 97_iphone_apple_ipad_pro |
| 98 | vitamin - organisms - bacteria - animals - plants | 33 | 98_vitamin_organisms_bacteria_animals |
| 99 | cells - blood - system - gland - organ | 33 | 99_cells_blood_system_gland |
| 100 | trek - star - kirk - starship - uss | 33 | 100_trek_star_kirk_starship |
| 101 | jews - nazi - camps - camp - extermination | 33 | 101_jews_nazi_camps_camp |
| 102 | space - moon - apollo - nasa - shuttle | 33 | 102_space_moon_apollo_nasa |
| 103 | roman - empire - rome - western - byzantine | 32 | 103_roman_empire_rome_western |
| 104 | marvel - studios - mcu - thor - superhero | 32 | 104_marvel_studios_mcu_thor |
| 105 | organisms - biology - genetic - genes - species | 32 | 105_organisms_biology_genetic_genes |
| 106 | fashion - gucci - designer - luxury - chanel | 32 | 106_fashion_gucci_designer_luxury |
| 107 | baseball - mlb - league - major - runs | 32 | 107_baseball_mlb_league_major |
| 108 | island - islands - ireland - isles - northern | 31 | 108_island_islands_ireland_isles |
| 109 | creature - folklore - legendary - depicted - or | 31 | 109_creature_folklore_legendary_depicted |
| 110 | empire - mughal - maratha - subcontinent - dynasty | 31 | 110_empire_mughal_maratha_subcontinent |
| 111 | social - racial - race - racism - white | 31 | 111_social_racial_race_racism |
| 112 | election - presidential - incumbent - tuesday - republican | 30 | 112_election_presidential_incumbent_tuesday |
| 113 | building - tallest - street - manhattan - york | 29 | 113_building_tallest_street_manhattan |
| 114 | bowl - super - champion - football - conference | 29 | 114_bowl_super_champion_football |
| 115 | election - elections - elect - held - general | 29 | 115_election_elections_elect_held |
| 116 | soviet - union - stalin - communist - russian | 29 | 116_soviet_union_stalin_communist |
| 117 | stock - exchange - securities - investment - companies | 29 | 117_stock_exchange_securities_investment |
| 118 | bmw - mercedesbenz - generation - sedan - marketed | 29 | 118_bmw_mercedesbenz_generation_sedan |
| 119 | currency - dollar - currencies - monetary - bank | 29 | 119_currency_dollar_currencies_monetary |
| 120 | dynasty - emperor - china - qin - chinese | 28 | 120_dynasty_emperor_china_qin |
| 121 | internet - protocol - ip - networks - network | 28 | 121_internet_protocol_ip_networks |
| 122 | tropical - cyclones - cyclone - hurricane - hemisphere | 28 | 122_tropical_cyclones_cyclone_hurricane |
| 123 | anthropomorphic - cartoon - character - peanuts - bugs | 28 | 123_anthropomorphic_cartoon_character_peanuts |
| 124 | elections - election - senate - elect - governor | 28 | 124_elections_election_senate_elect |
| 125 | windows - operating - microsoft - macos - server | 28 | 125_windows_operating_microsoft_macos |
| 126 | san - county - california - los - angeles | 27 | 126_san_county_california_los |
| 127 | potter - harry - hogwarts - rowling - rowlings | 27 | 127_potter_harry_hogwarts_rowling |
| 128 | tank - soviet - tanks - t72 - armoured | 27 | 128_tank_soviet_tanks_t72 |
| 129 | website - youtube - pornographic - videos - websites | 26 | 129_website_youtube_pornographic_videos |
| 130 | missile - missiles - surfacetoair - ballistic - system | 26 | 130_missile_missiles_surfacetoair_ballistic |
| 131 | formula - championship - fia - racing - drivers | 26 | 131_formula_championship_fia_racing |
| 132 | mario - game - nintendo - super - games | 26 | 132_mario_game_nintendo_super |
| 133 | composer - composers - symphony - music - pianist | 26 | 133_composer_composers_symphony_music |
| 134 | music - theatre - musical - art - or | 26 | 134_music_theatre_musical_art |
| 135 | party - political - democratic - liberal - labour | 25 | 135_party_political_democratic_liberal |
| 136 | province - canada - provinces - territories - city | 25 | 136_province_canada_provinces_territories |
| 137 | airport - busiest - international - passenger - traffic | 25 | 137_airport_busiest_international_passenger |
| 138 | china - shanghai - province - guangzhou - populous | 24 | 138_china_shanghai_province_guangzhou |
| 139 | flight - airport - airlines - accident - crashed | 24 | 139_flight_airport_airlines_accident |
| 140 | expedition - spanish - america - explorer - americas | 24 | 140_expedition_spanish_america_explorer |
| 141 | economy - gdp - capita - ppp - countries | 24 | 141_economy_gdp_capita_ppp |
| 142 | ball - sport - players - teams - team | 24 | 142_ball_sport_players_teams |
| 143 | thrones - fire - ice - hbo - fantasy | 23 | 143_thrones_fire_ice_hbo |
| 144 | uefa - champions - league - cup - organised | 23 | 144_uefa_champions_league_cup |
| 145 | terminator - transformers - fiction - science - action | 23 | 145_terminator_transformers_fiction_science |
| 146 | time - calendar - zone - year - daylight | 23 | 146_time_calendar_zone_year |
| 147 | caliphate - muhammad - ibn - islam - islamic | 22 | 147_caliphate_muhammad_ibn_islam |
| 148 | holmes - sherlock - dracula - conan - watson | 22 | 148_holmes_sherlock_dracula_conan |
| 149 | games - multisport - olympic - olympics - winter | 21 | 149_games_multisport_olympic_olympics |
| 150 | web - google - search - pages - users | 21 | 150_web_google_search_pages |
| 151 | google - chat - messaging - users - torrent | 21 | 151_google_chat_messaging_users |
| 152 | renaissance - italian - leonardo - michelangelo - vinci | 21 | 152_renaissance_italian_leonardo_michelangelo |
| 153 | amendment - court - constitution - rights - abortion | 21 | 153_amendment_court_constitution_rights |
| 154 | marvel - continuity - comics - mcu - cinematic | 21 | 154_marvel_continuity_comics_mcu |
| 155 | draft - players - nba - lottery - eligible | 20 | 155_draft_players_nba_lottery |
| 156 | kennedy - clinton - president - jacqueline - lewinsky | 20 | 156_kennedy_clinton_president_jacqueline |
| 157 | shooting - school - injured - killed - mass | 20 | 157_shooting_school_injured_killed |
| 158 | greys - anatomy - abc - medical - rhimes | 20 | 158_greys_anatomy_abc_medical |
| 159 | kardashian - kardashians - jenner - keeping - kourtney | 19 | 159_kardashian_kardashians_jenner_keeping |
| 160 | godfather - corleone - coppola - vito - pacino | 19 | 160_godfather_corleone_coppola_vito |
| 161 | script - alphabet - chinese - writing - write | 19 | 161_script_alphabet_chinese_writing |
| 162 | beatles - album - parlophone - studio - songs | 19 | 162_beatles_album_parlophone_studio |
| 163 | martial - boxing - combat - aikido - wrestling | 18 | 163_martial_boxing_combat_aikido |
| 164 | york - island - new - borough - county | 18 | 164_york_island_new_borough |
| 165 | court - supreme - justice - associate - jurist | 18 | 165_court_supreme_justice_associate |
| 166 | hamlet - shakespeare - shakespeares - tragedy - william | 18 | 166_hamlet_shakespeare_shakespeares_tragedy |
| 167 | hong - kong - martial - yen - chow | 18 | 167_hong_kong_martial_yen |
| 168 | rocky - stallone - rambo - sylvester - balboa | 18 | 168_rocky_stallone_rambo_sylvester |
| 169 | nobel - prize - physics - prizes - physicist | 18 | 169_nobel_prize_physics_prizes |
| 170 | thrones - hbo - game - 20112019 - fantasy | 18 | 170_thrones_hbo_game_20112019 |
| 171 | cricket - cricketer - indian - captain - righthanded | 17 | 171_cricket_cricketer_indian_captain |
| 172 | art - architecture - movement - style - baroque | 17 | 172_art_architecture_movement_style |
| 173 | nuclear - bomb - weapons - weapon - thermonuclear | 17 | 173_nuclear_bomb_weapons_weapon |
| 174 | amphetamine - enhancer - stimulant - drug - adhd | 16 | 174_amphetamine_enhancer_stimulant_drug |
| 175 | walking - dead - kirkman - adlard - amc | 16 | 175_walking_dead_kirkman_adlard |
| 176 | snuff - genre - comedy - laughter - films | 16 | 176_snuff_genre_comedy_laughter |
| 177 | superman - dc - aquaman - dceu - warner | 16 | 177_superman_dc_aquaman_dceu |
| 178 | health - care - medical - medicine - hospitals | 16 | 178_health_care_medical_medicine |
| 179 | color - colors - rgb - red - blue | 16 | 179_color_colors_rgb_red |
| 180 | smiley - bokeh - clothing - meme - face | 16 | 180_smiley_bokeh_clothing_meme |
| 181 | metallica - metal - band - ulrich - thrash | 15 | 181_metallica_metal_band_ulrich |
| 182 | economic - prices - inflation - price - crisis | 15 | 182_economic_prices_inflation_price |
| 183 | doctor - incarnation - thirteenth - bbc - specials | 15 | 183_doctor_incarnation_thirteenth_bbc |
| 184 | rings - tolkiens - tolkien - hobbit - lord | 15 | 184_rings_tolkiens_tolkien_hobbit |
| 185 | pope - church - vatican - catholic - roncalli | 15 | 185_pope_church_vatican_catholic |
| 186 | rockefeller - miss - oil - rothschild - family | 15 | 186_rockefeller_miss_oil_rothschild |
| 187 | seinfeld - comedian - sitcom - kramer - jerry | 14 | 187_seinfeld_comedian_sitcom_kramer |
| 188 | ottoman - sultan - selim - empire - erturul | 14 | 188_ottoman_sultan_selim_empire |
| 189 | chinese - china - ccp - mao - communist | 14 | 189_chinese_china_ccp_mao |
| 190 | philosopher - philosophy - greek - treatise - mathematician | 14 | 190_philosopher_philosophy_greek_treatise |
| 191 | mark - punctuation - exclamation - bracket - marks | 14 | 191_mark_punctuation_exclamation_bracket |
| 192 | event - wrestlemania - wwe - payperview - livestreaming | 14 | 192_event_wrestlemania_wwe_payperview |
| 193 | norse - mythology - old - loki - odin | 14 | 193_norse_mythology_old_loki |
| 194 | dre - hop - hip - wutang - group | 14 | 194_dre_hop_hip_wutang |
| 195 | newspaper - daily - guardian - times - news | 14 | 195_newspaper_daily_guardian_times |
| 196 | theft - kratos - auto - rockstar - god | 13 | 196_theft_kratos_auto_rockstar |
| 197 | drag - rupauls - race - vh1 - season | 13 | 197_drag_rupauls_race_vh1 |
| 198 | polyethylene - polymers - silk - plastics - synthetic | 13 | 198_polyethylene_polymers_silk_plastics |
| 199 | strings - instrument - instruments - guitar - electronic | 13 | 199_strings_instrument_instruments_guitar |
| 200 | population - census - rate - growth - increase | 13 | 200_population_census_rate_growth |
| 201 | resolution - hdtv - display - hd - pixels | 13 | 201_resolution_hdtv_display_hd |
| 202 | athletic - hockey - ncaa - conference - university | 13 | 202_athletic_hockey_ncaa_conference |
| 203 | nervous - spinal - brain - nerves - cord | 13 | 203_nervous_spinal_brain_nerves |
| 204 | peppers - chili - hot - rock - red | 13 | 204_peppers_chili_hot_rock |
| 205 | accounting - tax - financial - nonprofit - entity | 12 | 205_accounting_tax_financial_nonprofit |
| 206 | swift - album - studio - taylor - singersongwriter | 12 | 206_swift_album_studio_taylor |
| 207 | sheldon - bang - theory - big - parsons | 12 | 207_sheldon_bang_theory_big |
| 208 | conjuring - wan - annabelle - lorraine - dauberman | 12 | 208_conjuring_wan_annabelle_lorraine |
| 209 | karate - kid - miyagi - macchio - kai | 12 | 209_karate_kid_miyagi_macchio |
| 210 | earthquake - eruption - tsunami - fault - occurred | 12 | 210_earthquake_eruption_tsunami_fault |
| 211 | guard - guards - ball - positions - midfielders | 12 | 211_guard_guards_ball_positions |
| 212 | geologic - planets - earth - how - earths | 12 | 212_geologic_planets_earth_how |
| 213 | card - game - cards - chess - baccarat | 12 | 213_card_game_cards_chess |
| 214 | zodiac - sign - astrological - transits - spans | 12 | 214_zodiac_sign_astrological_transits |
| 215 | gandhi - singh - godse - india - bhindranwale | 12 | 215_gandhi_singh_godse_india |
| 216 | cannabis - cigarette - thc - tobacco - cocaine | 11 | 216_cannabis_cigarette_thc_tobacco |
| 217 | xmen - wolverine - installment - jackman - superhero | 11 | 217_xmen_wolverine_installment_jackman |
| 218 | caucasus - azerbaijan - baku - sea - caspian | 11 | 218_caucasus_azerbaijan_baku_sea |
| 219 | draft - nfl - meeting - select - eligible | 11 | 219_draft_nfl_meeting_select |
| 220 | nobility - royalty - rank - knighthood - dukes | 11 | 220_nobility_royalty_rank_knighthood |
| 221 | saudi - arabia - saud - abdulaziz - bin | 11 | 221_saudi_arabia_saud_abdulaziz |
| 222 | jolyne - jotaro - her - school - stand | 11 | 222_jolyne_jotaro_her_school |
| 223 | prefecture - kon - mifune - ueno - hachik | 11 | 223_prefecture_kon_mifune_ueno |
| 224 | guru - granth - baba - gobind - das | 10 | 224_guru_granth_baba_gobind |
| 225 | un - nations - intergovernmental - organisation - organization | 10 | 225_un_nations_intergovernmental_organisation |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
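These values correspond to `BERTopic` constructor arguments; a minimal sketch of instantiating a model with the same settings (zero-shot options left at their defaults, training corpus not shown):
```python
# Re-create a BERTopic model with the hyperparameters listed above.
from bertopic import BERTopic

topic_model = BERTopic(
    language="english",
    top_n_words=10,
    n_gram_range=(1, 1),
    min_topic_size=10,
    nr_topics=None,
    low_memory=False,
    calculate_probabilities=False,
    seed_topic_list=None,
    verbose=False,
)
# topics, probs = topic_model.fit_transform(docs)  # docs: your own list of documents
```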
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 2.2.2
* Scikit-Learn: 1.4.2
* Sentence-transformers: 2.7.0
* Transformers: 4.40.2
* Numba: 0.59.1
* Plotly: 5.22.0
* Python: 3.11.9
|
doubledsbv/KafkaLM-Mixtral-8x7B-V0.2_DPO-AWQ | doubledsbv | 2024-05-28T17:53:24Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-05-28T17:47:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
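No snippet is provided above; below is a minimal, untested sketch for loading this 4-bit AWQ checkpoint with 🤗 transformers. It assumes the `autoawq` package is installed and that the tokenizer ships a chat template; the prompt and generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "doubledsbv/KafkaLM-Mixtral-8x7B-V0.2_DPO-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ weights are already 4-bit; device_map="auto" spreads the experts across the available GPUs
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what AWQ quantization does in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```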
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DiederikMartens/eBERT_sa_cv_13_fold8 | DiederikMartens | 2024-05-28T17:52:42Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T16:27:46Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_13_fold8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eBERT_sa_cv_13_fold8
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5854
- F1: 0.5584
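Although the card gives no usage example, the checkpoint can be queried through the standard text-classification pipeline; note that the label names and the intended input domain are not documented here, so the sketch below is untested and illustrative only.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="DiederikMartens/eBERT_sa_cv_13_fold8")
# The mapping from LABEL_0/LABEL_1/... to sentiment classes is not documented in this card
print(classifier("Replace this with a sentence from your own evaluation data."))
```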
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.5765 | 0.4529 |
| 0.6339 | 2.0 | 650 | 0.5104 | 0.5005 |
| 0.6339 | 3.0 | 975 | 0.5854 | 0.5584 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
yqelz/wsd-rubert-cased | yqelz | 2024-05-28T17:49:09Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-27T08:52:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
Resolves the word sense disambiguation (WSD) task, based on CoBaLD Rus.
## Model Details
### Model Description
- **Developed by:** Sergey Biryukov
- **Model type:** WSD
- **Language(s) (NLP):** Russian
- **Finetuned from model [optional]:** rubert-base-cased
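No usage example is provided; a minimal, untested sketch with the token-classification pipeline follows. The sense label inventory comes from the CoBaLD Rus annotation and is not documented in this card, and the Russian example sentence is purely illustrative.

```python
from transformers import pipeline

wsd = pipeline("token-classification", model="yqelz/wsd-rubert-cased", aggregation_strategy="simple")
# Each token (or word group) is tagged with a predicted word sense label
print(wsd("Он открыл счёт в банке рядом с домом."))
```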
|
zakaria99/Gptmodel | zakaria99 | 2024-05-28T17:39:49Z | 141 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T17:39:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
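In the absence of an official snippet, a minimal sketch with the text-generation pipeline is shown below; the prompt is purely illustrative, since the training data and intended prompts are not documented.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="zakaria99/Gptmodel")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```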
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
intone/unaligned-llama3-8b-v0.1-16bit | intone | 2024-05-28T17:39:36Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-23T16:21:36Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- trl
---
Unaligned Llama 3 8B, 16-bit.
<br> Not DPO'd, just SFT-trained. Horrific model (THIS IS A TEXT GENERATION MODEL) |
datek/Qwen-Qwen1.5-1.8B-1716917636 | datek | 2024-05-28T17:36:00Z | 138 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T17:33:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
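In the absence of an official snippet, here is a minimal, untested sketch for this Qwen1.5-based checkpoint; it assumes the tokenizer ships the standard Qwen chat template, and the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datek/Qwen-Qwen1.5-1.8B-1716917636"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give a one-line definition of tokenization."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```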
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuietImpostor/Llama-3-Refueled-Pruned | QuietImpostor | 2024-05-28T17:31:08Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"dataset:yahma/alpaca-cleaned",
"base_model:refuelai/Llama-3-Refueled",
"base_model:finetune:refuelai/Llama-3-Refueled",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-21T19:26:24Z | ---
base_model:
- refuelai/Llama-3-Refueled
library_name: transformers
tags:
- mergekit
- merge
license: llama3
datasets:
- yahma/alpaca-cleaned
language:
- en
---
### Pruning Details
This is a pruned version of [Llama 3 Refueled](https://www.huggingface.co/refuelai/llama-3-refueled), created with [mergekit](https://github.com/cg123/mergekit) and [PruneMe](https://www.github.com/arcee-ai/PruneMe).
The model is semi-tested, but still needs some debugging, namely with converting to GGUF, though I am working on that.
Note: the [dataset](https://www.huggingface.co/yahma/alpaca-cleaned) was used for evaluating what layers should be pruned. This model was **NOT** finetuned.
### Performance
After only one test (limited by compute and by very long inference times on my 3060 Ti, 8 GB), the model does show some interesting results.
Here's the response after being prompted "Hi!" using the [example from Meta](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3).
```model_response
vel tips and recommendations.user
Hi!assistant
Hi! I can help you find the best travel tips and recommendations for your next trip. Where you most interested to travel and what kind of activities you most to to the 9e sure, we can start and letiing 10e 11e 12e 13e 14e 15e 16e 17e 18e 19e 20e 21e 23e 24e 5e 6e 7e 8e 9e 10e 11e 12e 13e 14e 15e
```
Even without finetuning, the model still exhibits some degree of instruction following.
Fine-tuning was planned, but it is no longer in progress due to issues with unsloth. However, I am working on a project that will hopefully make pruning models easier.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: refuelai/Llama-3-Refueled
layer_range: [0, 19]
- sources:
- model: refuelai/Llama-3-Refueled
layer_range: [29, 32]
merge_method: passthrough
dtype: bfloat16
``` |
DiederikMartens/tsBERT_sa_cv_13_fold9 | DiederikMartens | 2024-05-28T17:29:56Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T17:07:02Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_13_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_13_fold9
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6597
- F1: 0.6462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.4657 | 0.6034 |
| 0.4337 | 2.0 | 650 | 0.4886 | 0.5960 |
| 0.4337 | 3.0 | 975 | 0.6597 | 0.6462 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/mBERT_sa_cv_13_fold9 | DiederikMartens | 2024-05-28T17:29:47Z | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T17:06:58Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_13_fold9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_13_fold9
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5084
- F1: 0.5983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.5642 | 0.4782 |
| 0.5411 | 2.0 | 650 | 0.5084 | 0.5983 |
| 0.5411 | 3.0 | 975 | 0.6772 | 0.5917 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
cs552-mlp/phi3-dpo | cs552-mlp | 2024-05-28T17:28:04Z | 2 | 0 | peft | [
"peft",
"safetensors",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"region:us"
] | null | 2024-05-28T17:08:06Z | ---
library_name: peft
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Model Card for Model ID
DPO-finetuned version of `phi3-instruct-4k` on student-annotated preference data, focusing on
course-content questions from the EPFL curriculum (physics, math, CS). |
nisar2424/Nisar__ | nisar2424 | 2024-05-28T17:23:47Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-05-28T17:23:47Z | ---
license: other
license_name: other
license_link: LICENSE
---
|
dwb2023/paligemma_rlaifv-V-1 | dwb2023 | 2024-05-28T17:23:21Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"paligemma",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2024-05-28T05:30:29Z | ---
license: gemma
library_name: peft
tags:
- generated_from_trainer
base_model: google/paligemma-3b-pt-224
model-index:
- name: paligemma_rlaifv-V-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_rlaifv-V-1
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
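The adapter is not runnable on its own; below is a minimal, untested sketch of attaching it to the base PaliGemma checkpoint with `peft`. The prompt and the image path are placeholders.

```python
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel
from PIL import Image

base_id = "google/paligemma-3b-pt-224"
base = PaliGemmaForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "dwb2023/paligemma_rlaifv-V-1")  # attach the LoRA adapter
processor = AutoProcessor.from_pretrained(base_id)

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(text="describe the image", images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(output[0], skip_special_tokens=True))
```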
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 8
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
MrezaPRZ/codellama_synthetic_gretel_bigquery | MrezaPRZ | 2024-05-28T17:23:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T17:20:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
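No snippet is given; a minimal, untested sketch follows. The prompt template used during fine-tuning is not documented, so a plain natural-language request is used here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MrezaPRZ/codellama_synthetic_gretel_bigquery"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a BigQuery SQL query that counts orders per customer in the table `shop.orders`.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```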
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
amosp5/llama3-8b-instruct-scrum | amosp5 | 2024-05-28T17:21:11Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-28T17:15:03Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: llama3-8b-instruct-scrum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-instruct-scrum
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.2
- Pytorch 2.3.0a0+40ec155e58.nv24.03
- Datasets 2.19.1
- Tokenizers 0.15.2 |
vuongnhathien/test-wrong-label | vuongnhathien | 2024-05-28T17:14:29Z | 192 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnextv2-base-22k-384",
"base_model:finetune:facebook/convnextv2-base-22k-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-28T17:06:36Z | ---
license: apache-2.0
base_model: facebook/convnextv2-base-22k-384
tags:
- generated_from_trainer
model-index:
- name: test-wrong-label
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-wrong-label
This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.9315 | 0.7625 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
hchcsuim/batch-size-16_FFPP-Raw_1FPS_faces-expand-0-aligned_unaugmentation | hchcsuim | 2024-05-28T17:13:29Z | 215 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-28T17:00:08Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size-16_FFPP-Raw_1FPS_faces-expand-0-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9837432499886555
- name: Precision
type: precision
value: 0.9830542407298831
- name: Recall
type: recall
value: 0.9964053803339518
- name: F1
type: f1
value: 0.9896847848777363
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size-16_FFPP-Raw_1FPS_faces-expand-0-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0442
- Accuracy: 0.9837
- Precision: 0.9831
- Recall: 0.9964
- F1: 0.9897
- Roc Auc: 0.9991
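The card documents metrics but no usage snippet; a minimal, untested sketch with the image-classification pipeline is shown below. The class names for real/fake frames are not documented here, and the input is assumed to be an aligned face crop, as suggested by the dataset name.

```python
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="hchcsuim/batch-size-16_FFPP-Raw_1FPS_faces-expand-0-aligned_unaugmentation",
)
print(detector("face_crop.jpg"))  # placeholder path to an aligned face crop
```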
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.0483 | 1.0 | 1377 | 0.0442 | 0.9837 | 0.9831 | 0.9964 | 0.9897 | 0.9991 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.3.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
malerbe/q-FrozenLake-v1-4x4-noSlippery | malerbe | 2024-05-28T17:11:50Z | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-24T09:57:19Z | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# Assumes gymnasium (imported as gym) and the load_from_hub helper from the
# Hugging Face Deep RL course notebook are available in the environment.
model = load_from_hub(repo_id="malerbe/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
momina296/flan-t5-base-imdb-text-classification | momina296 | 2024-05-28T17:09:18Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-11T16:54:00Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-imdb-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-imdb-text-classification
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5742
- F1: 54.5455
- Gen Len: 2.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
Shiv1143/corgy_dog_LoRA | Shiv1143 | 2024-05-28T17:09:03Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-28T16:52:17Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Shiv1143/corgy_dog_LoRA
<Gallery />
## Model description
These are Shiv1143/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Shiv1143/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
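The snippet above is still a TODO; here is a minimal, untested sketch of the usual diffusers workflow for SDXL LoRA weights. The scheduler defaults, step count, and output filename are illustrative assumptions.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Shiv1143/corgy_dog_LoRA")

# "TOK" is the trigger token learned during DreamBooth training
image = pipe("a photo of TOK dog in a bucket", num_inference_steps=25).images[0]
image.save("tok_dog.png")
```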
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
aknaraya/summarization_fine_tune_bbc_summary | aknaraya | 2024-05-28T17:08:38Z | 10 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-28T09:52:46Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: aknaraya/summarization_fine_tune_bbc_summary
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aknaraya/summarization_fine_tune_bbc_summary
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5873
- Validation Loss: 0.3274
- Train Lr: 2e-05
- Epoch: 9
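No usage example is given; a minimal, untested sketch for this TensorFlow checkpoint follows. The `summarize:` prefix is the usual T5 convention and is an assumption here, since the fine-tuning prompt format is not documented.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "aknaraya/summarization_fine_tune_bbc_summary"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Replace this with the full text of a BBC article."
inputs = tokenizer("summarize: " + article, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```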
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Lr | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 0.7762 | 0.4048 | 2e-05 | 0 |
| 0.7113 | 0.3899 | 2e-05 | 1 |
| 0.6596 | 0.3765 | 2e-05 | 2 |
| 0.6524 | 0.3654 | 2e-05 | 3 |
| 0.6652 | 0.3553 | 2e-05 | 4 |
| 0.6315 | 0.3476 | 2e-05 | 5 |
| 0.5763 | 0.3411 | 2e-05 | 6 |
| 0.5952 | 0.3358 | 2e-05 | 7 |
| 0.5940 | 0.3309 | 2e-05 | 8 |
| 0.5873 | 0.3274 | 2e-05 | 9 |
### Framework versions
- Transformers 4.41.0
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_13_fold8 | DiederikMartens | 2024-05-28T17:06:54Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T16:23:17Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_13_fold8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_13_fold8
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5081
- F1: 0.6678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.4193 | 0.6050 |
| 0.45 | 2.0 | 650 | 0.4256 | 0.6563 |
| 0.45 | 3.0 | 975 | 0.5081 | 0.6678 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bellge/cw3_trained_model_smaller | bellge | 2024-05-28T16:57:00Z | 120 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T16:56:17Z | ---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: cw3_trained_model_smaller
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cw3_trained_model_smaller
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7497
- Accuracy: 0.7379
- F1: 0.7372
- Precision: 0.7388
- Recall: 0.7379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5662 | 3.11 | 500 | 0.7752 | 0.6552 | 0.6330 | 0.7420 | 0.6552 |
| 0.2541 | 6.21 | 1000 | 0.7497 | 0.7379 | 0.7372 | 0.7388 | 0.7379 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Fawazzx/Saul-semantic.v3 | Fawazzx | 2024-05-28T16:54:31Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-28T08:28:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
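In the absence of an official snippet, here is a minimal, untested sketch. The repo is tagged 4-bit/bitsandbytes, so the quantization settings are expected to be read from the checkpoint itself; the legal prompt is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fawazzx/Saul-semantic.v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarise the doctrine of consideration in contract law."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```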
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Milad1b/Clinical_BERT_CL_DRugcomb_FT | Milad1b | 2024-05-28T16:53:37Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T20:10:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Essacheez/gemma-7b-it-finetune-code-10k-gemma-style | Essacheez | 2024-05-28T16:50:15Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T15:34:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
straenyagun/akilvedavranisbozukluklari-classification | straenyagun | 2024-05-28T16:48:58Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T16:48:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yetanotherhif/jmg_starcoder2-7b-100k | yetanotherhif | 2024-05-28T16:48:41Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"starcoder2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T12:20:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nicholasb00/llama3_newds | nicholasb00 | 2024-05-28T16:47:16Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2024-05-28T16:47:09Z | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Meta-Llama-3-8B
model-index:
- name: llama3_newds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nicholas-bianchini-unipr/huggingface/runs/1lao0bjw)
# llama3_newds
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
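The card does not yet document usage. Assuming the adapter was saved in the standard PEFT format against the listed base model, loading it would look roughly like the following sketch (the prompt is a placeholder, not from the card):
```python
# Sketch only, not taken from the original card: attach the LoRA adapter to the
# base Llama-3 model with PEFT and run a short generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("NousResearch/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B")
model = PeftModel.from_pretrained(base, "nicholasb00/llama3_newds")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```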
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 20
### Training results
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
vonvolous/tattoo_realism_before_LoRA | vonvolous | 2024-05-28T16:46:35Z | 9 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-25T04:12:15Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: In the style of TOK tattoo
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - vonvolous/tattoo_realism_LoRA
<Gallery />
## Model description
These are vonvolous/tattoo_realism_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `In the style of TOK tattoo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](vonvolous/tattoo_realism_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
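Since that snippet is still a TODO, here is a minimal sketch of how SDXL LoRA weights like these are usually loaded with diffusers; the repo id and prompt are assumptions based on this card rather than a verified example:
```python
# Minimal sketch, not from the original card: load the SDXL base model, apply
# these DreamBooth LoRA weights, then generate with the trigger phrase.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("vonvolous/tattoo_realism_before_LoRA")  # assumed repo id

image = pipe("In the style of TOK tattoo, a snake coiled around a dagger").images[0]
image.save("tattoo.png")
```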
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-591725 | fine-tuned | 2024-05-28T16:46:08Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-591725",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T16:45:17Z | ---
license: apache-2.0
datasets:
- fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-591725
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-591725',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf | RichardErkhov | 2024-05-28T16:46:04Z | 80 | 0 | null | [
"gguf",
"arxiv:2402.06332",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-28T04:54:44Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
internlm2-math-plus-7b - GGUF
- Model creator: https://huggingface.co/internlm/
- Original model: https://huggingface.co/internlm/internlm2-math-plus-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [internlm2-math-plus-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q2_K.gguf) | Q2_K | 2.8GB |
| [internlm2-math-plus-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ3_XS.gguf) | IQ3_XS | 3.1GB |
| [internlm2-math-plus-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ3_S.gguf) | IQ3_S | 3.25GB |
| [internlm2-math-plus-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q3_K_S.gguf) | Q3_K_S | 3.24GB |
| [internlm2-math-plus-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ3_M.gguf) | IQ3_M | 3.35GB |
| [internlm2-math-plus-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q3_K.gguf) | Q3_K | 3.57GB |
| [internlm2-math-plus-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q3_K_M.gguf) | Q3_K_M | 3.57GB |
| [internlm2-math-plus-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q3_K_L.gguf) | Q3_K_L | 3.85GB |
| [internlm2-math-plus-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ4_XS.gguf) | IQ4_XS | 3.99GB |
| [internlm2-math-plus-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_0.gguf) | Q4_0 | 4.15GB |
| [internlm2-math-plus-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ4_NL.gguf) | IQ4_NL | 4.19GB |
| [internlm2-math-plus-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_K_S.gguf) | Q4_K_S | 4.18GB |
| [internlm2-math-plus-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_K.gguf) | Q4_K | 4.39GB |
| [internlm2-math-plus-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_K_M.gguf) | Q4_K_M | 4.39GB |
| [internlm2-math-plus-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_1.gguf) | Q4_1 | 4.58GB |
| [internlm2-math-plus-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_0.gguf) | Q5_0 | 5.0GB |
| [internlm2-math-plus-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_K_S.gguf) | Q5_K_S | 5.0GB |
| [internlm2-math-plus-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_K.gguf) | Q5_K | 5.13GB |
| [internlm2-math-plus-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_K_M.gguf) | Q5_K_M | 5.13GB |
| [internlm2-math-plus-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_1.gguf) | Q5_1 | 5.43GB |
| [internlm2-math-plus-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q6_K.gguf) | Q6_K | 5.91GB |
| [internlm2-math-plus-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q8_0.gguf) | Q8_0 | 7.66GB |
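As an illustration that is not part of the original card, a single quant from the table above can be fetched programmatically and then passed to a llama.cpp build or its bindings; the filename here is just one example choice:
```python
# Example sketch: download one GGUF quant from this repo with huggingface_hub.
# Pick any filename from the table above; Q4_K_M is used here as an example.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf",
    filename="internlm2-math-plus-7b.Q4_K_M.gguf",
)
print(gguf_path)  # local path to hand to llama.cpp, e.g. llama-cli -m <path>
```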
Original model description:
---
pipeline_tag: text-generation
license: other
language:
- en
- zh
tags:
- math
---
# InternLM-Math-Plus
<div align="center">
<img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM-Math</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">Plus</font></i>
</a>
</sup>
<div> </div>
</div>
State-of-the-art bilingual open-source math reasoning LLMs.
A **solver**, **prover**, **verifier**, **augmentor**.
[π» Github](https://github.com/InternLM/InternLM-Math) [π€ Demo](https://huggingface.co/spaces/internlm/internlm2-math-7b)
</div>
# News
- [2024.05.24] We release the updated InternLM2-Math-Plus in 4 sizes (1.8B, 7B, 20B, and 8x22B) with state-of-the-art performance. We significantly improve informal math reasoning performance (chain-of-thought and code interpreter) and formal math reasoning performance (LEAN 4 translation and LEAN 4 theorem proving).
- [2024.02.10] We add tech reports and citation reference.
- [2024.01.31] We add MiniF2F results with evaluation codes!
- [2024.01.29] We add checkpoints from ModelScope and update results for majority voting and Code Interpreter. The tech report is on the way!
- [2024.01.26] We add checkpoints on OpenXLab, which makes downloading easier for users in China!
# Performance
## Formal Math Reasoning
We evaluate the performance of InternLM2-Math-Plus on the formal math reasoning benchmark MiniF2F-test. The evaluation setting is the same as Llemma with LEAN 4.
| Models | MiniF2F-test |
| -------------------------------- | ------------ |
| ReProver | 26.5 |
| LLMStep | 27.9 |
| GPT-F | 36.6 |
| HTPS | 41.0 |
| Llemma-7B | 26.2 |
| Llemma-34B | 25.8 |
| InternLM2-Math-7B-Base | 30.3 |
| InternLM2-Math-20B-Base | 29.5 |
| InternLM2-Math-Plus-1.8B | 38.9 |
| InternLM2-Math-Plus-7B | **43.4** |
| InternLM2-Math-Plus-20B | 42.6 |
| InternLM2-Math-Plus-Mixtral8x22B | 37.3 |
## Informal Math Reasoning
We evaluate the performance of InternLM2-Math-Plus on the informal math reasoning benchmarks MATH and GSM8K. InternLM2-Math-Plus-1.8B outperforms MiniCPM-2B in the smallest size setting. InternLM2-Math-Plus-7B outperforms Deepseek-Math-7B-RL, the state-of-the-art open-source math reasoning model. InternLM2-Math-Plus-Mixtral8x22B achieves 68.5 on MATH (with Python) and 91.8 on GSM8K.
| Model | MATH | MATH-Python | GSM8K |
| -------------------------------- | -------- | ----------- | -------- |
| MiniCPM-2B | 10.2 | - | 53.8 |
| InternLM2-Math-Plus-1.8B | **37.0** | **41.5** | **58.8** |
| InternLM2-Math-7B | 34.6 | 50.9 | 78.1 |
| Deepseek-Math-7B-RL | 51.7 | 58.8 | **88.2** |
| InternLM2-Math-Plus-7B | **53.0** | **59.7** | 85.8 |
| InternLM2-Math-20B | 37.7 | 54.3 | 82.6 |
| InternLM2-Math-Plus-20B | **53.8** | **61.8** | **87.7** |
| Mixtral8x22B-Instruct-v0.1 | 41.8 | - | 78.6 |
| Eurux-8x22B-NCA | 49.0 | - | - |
| InternLM2-Math-Plus-Mixtral8x22B | **58.1** | **68.5** | **91.8** |
We also evaluate models on [MathBench-A](https://github.com/open-compass/MathBench). InternLM2-Math-Plus-Mixtral8x22B performs comparably to Claude 3 Opus.
| Model | Arithmetic | Primary | Middle | High | College | Average |
| -------------------------------- | ---------- | ------- | ------ | ---- | ------- | ------- |
| GPT-4o-0513 | 77.7 | 87.7 | 76.3 | 59.0 | 54.0 | 70.9 |
| Claude 3 Opus | 85.7 | 85.0 | 58.0 | 42.7 | 43.7 | 63.0 |
| Qwen-Max-0428 | 72.3 | 86.3 | 65.0 | 45.0 | 27.3 | 59.2 |
| Qwen-1.5-110B | 70.3 | 82.3 | 64.0 | 47.3 | 28.0 | 58.4 |
| Deepseek-V2 | 82.7 | 89.3 | 59.0 | 39.3 | 29.3 | 59.9 |
| Llama-3-70B-Instruct | 70.3 | 86.0 | 53.0 | 38.7 | 34.7 | 56.5 |
| InternLM2-Math-Plus-Mixtral8x22B | 77.5 | 82.0 | 63.6 | 50.3 | 36.8 | 62.0 |
| InternLM2-Math-20B | 58.7 | 70.0 | 43.7 | 24.7 | 12.7 | 42.0 |
| InternLM2-Math-Plus-20B | 65.8 | 79.7 | 59.5 | 47.6 | 24.8 | 55.5 |
| Llama3-8B-Instruct | 54.7 | 71.0 | 25.0 | 19.0 | 14.0 | 36.7 |
| InternLM2-Math-7B | 53.7 | 67.0 | 41.3 | 18.3 | 8.0 | 37.7 |
| Deepseek-Math-7B-RL | 68.0 | 83.3 | 44.3 | 33.0 | 23.0 | 50.3 |
| InternLM2-Math-Plus-7B | 61.4 | 78.3 | 52.5 | 40.5 | 21.7 | 50.9 |
| MiniCPM-2B | 49.3 | 51.7 | 18.0 | 8.7 | 3.7 | 26.3 |
| InternLM2-Math-Plus-1.8B | 43.0 | 43.3 | 25.4 | 18.9 | 4.7 | 27.1 |
# Citation and Tech Report
```
@misc{ying2024internlmmath,
title={InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning},
author={Huaiyuan Ying and Shuo Zhang and Linyang Li and Zhejian Zhou and Yunfan Shao and Zhaoye Fei and Yichuan Ma and Jiawei Hong and Kuikun Liu and Ziyi Wang and Yudong Wang and Zijian Wu and Shuaibin Li and Fengzhe Zhou and Hongwei Liu and Songyang Zhang and Wenwei Zhang and Hang Yan and Xipeng Qiu and Jiayu Wang and Kai Chen and Dahua Lin},
year={2024},
eprint={2402.06332},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
pchopalli/whisper-small-or-en | pchopalli | 2024-05-28T16:44:36Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"or",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-28T16:43:31Z | ---
language:
- or
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Oriya Translate - Prashant C
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: or
split: test
args: 'config: bg, split: test'
metrics:
- name: Wer
type: wer
value: 26.790595954073265
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Oriya Translate - Prashant C
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3157
- Wer Ortho: 60.6530
- Wer: 26.7906
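The card itself does not show inference code. Assuming the checkpoint follows the usual Whisper fine-tuning layout, it can be used through the transformers ASR pipeline roughly as in this sketch (the audio file name is a placeholder):
```python
# Sketch only, not from the original card: transcribe an Oriya audio clip with
# this fine-tuned Whisper checkpoint via the transformers pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="pchopalli/whisper-small-or-en")
result = asr("sample_oriya_clip.wav")  # placeholder path to a local audio file
print(result["text"])
```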
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.0106 | 9.6154 | 500 | 0.3157 | 60.6530 | 26.7906 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
javidanaslanli/tiny-az-tokenizer-13k | javidanaslanli | 2024-05-28T16:40:10Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T16:40:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Klevin/DECYPHERS-TEST-2.0 | Klevin | 2024-05-28T16:35:30Z | 138 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T16:28:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Weblet/llama2-7b-hf-chat-lora-v3-turbo17169127082140281_mlabonne-guanaco-llama2-1k_train | Weblet | 2024-05-28T16:34:43Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T16:30:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GhostDragon01/rfp-questionnaires-test-01 | GhostDragon01 | 2024-05-28T16:32:33Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:yleo/EmertonMonarch-7B",
"base_model:adapter:yleo/EmertonMonarch-7B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-05-28T16:00:23Z | ---
license: cc-by-nc-4.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: yleo/EmertonMonarch-7B
datasets:
- generator
model-index:
- name: rfp-questionnaires-test-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rfp-questionnaires-test-01
This model is a fine-tuned version of [yleo/EmertonMonarch-7B](https://huggingface.co/yleo/EmertonMonarch-7B) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
fimbulvntr/vllm_model_70b | fimbulvntr | 2024-05-28T16:32:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-70b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-70b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T16:28:58Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-70b-bnb-4bit
---
# Uploaded model
- **Developed by:** fimbulvntr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-70b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ClaudioItaly/TopEvolution-Q8_0-GGUF | ClaudioItaly | 2024-05-28T16:30:34Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:mergekit-community/mergekit-slerp-ebgdloh",
"base_model:merge:mergekit-community/mergekit-slerp-ebgdloh",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T16:30:15Z | ---
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- NousResearch/Hermes-2-Pro-Mistral-7B
- mergekit-community/mergekit-slerp-ebgdloh
---
# ClaudioItaly/TopEvolution-Q8_0-GGUF
This model was converted to GGUF format from [`mergekit-community/TopEvolution`](https://huggingface.co/mergekit-community/TopEvolution) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mergekit-community/TopEvolution) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ClaudioItaly/TopEvolution-Q8_0-GGUF --model topevolution-q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ClaudioItaly/TopEvolution-Q8_0-GGUF --model topevolution-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m topevolution-q8_0.gguf -n 128
```
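Beyond the CLI and server invocations above, the same quant can be driven from Python through the llama-cpp-python bindings; this is a sketch with arbitrary sampling settings, not part of the original card:
```python
# Sketch: run the downloaded GGUF locally via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="topevolution-q8_0.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```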
|
RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf | RichardErkhov | 2024-05-28T16:29:31Z | 18 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-28T12:47:10Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OpenHermes-2.5-Nebula-v2-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/OpenHermes-2.5-Nebula-v2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OpenHermes-2.5-Nebula-v2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [OpenHermes-2.5-Nebula-v2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [OpenHermes-2.5-Nebula-v2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [OpenHermes-2.5-Nebula-v2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [OpenHermes-2.5-Nebula-v2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [OpenHermes-2.5-Nebula-v2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [OpenHermes-2.5-Nebula-v2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
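Single files can be fetched with `huggingface_hub` instead of cloning the whole repo; a short sketch (the Q4_K_M file is picked from the table only as an example):
```python
from huggingface_hub import hf_hub_download

# Download one specific quant from the table above; any filename listed
# in the table works the same way.
path = hf_hub_download(
    repo_id="RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf",
    filename="OpenHermes-2.5-Nebula-v2-7B.Q4_K_M.gguf",
)
print(path)  # local path of the downloaded GGUF file
```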
Original model description:
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# OpenHermes-2.5-Nebula-v2-7B
OpenHermes-2.5-Nebula-v2-7B is a merge of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora)
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-----------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
| DROP (3-shot) | |
|
Yoxas/autotrain-gpt2-statistical1 | Yoxas | 2024-05-28T16:29:12Z | 137 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-28T16:05:31Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # move inputs to the same device as the model (works on CPU or GPU)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
vuongnhathien/convnext-base-3e-5-wd-1e-8-raug | vuongnhathien | 2024-05-28T16:26:31Z | 193 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"convnextv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnextv2-base-22k-384",
"base_model:finetune:facebook/convnextv2-base-22k-384",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-28T11:03:30Z | ---
license: apache-2.0
base_model: facebook/convnextv2-base-22k-384
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-base-3e-5-wd-1e-8-raug
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9458333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-base-3e-5-wd-1e-8-raug
This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2296
- Accuracy: 0.9458
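For quick inference, the standard `transformers` image-classification pipeline should work with this checkpoint; a minimal sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint straight from the Hub.
classifier = pipeline(
    "image-classification",
    model="vuongnhathien/convnext-base-3e-5-wd-1e-8-raug",
)

# Any local path, URL, or PIL image works here.
print(classifier("example.jpg", top_k=3))
```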
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6237 | 1.0 | 1099 | 0.3587 | 0.8994 |
| 0.4599 | 2.0 | 2198 | 0.2743 | 0.9213 |
| 0.359 | 3.0 | 3297 | 0.2579 | 0.9252 |
| 0.3047 | 4.0 | 4396 | 0.2404 | 0.9388 |
| 0.2869 | 5.0 | 5495 | 0.2348 | 0.9408 |
| 0.2468 | 6.0 | 6594 | 0.2276 | 0.9455 |
| 0.2098 | 7.0 | 7693 | 0.2303 | 0.9471 |
| 0.1944 | 8.0 | 8792 | 0.2244 | 0.9495 |
| 0.1739 | 9.0 | 9891 | 0.2247 | 0.9507 |
| 0.1508 | 10.0 | 10990 | 0.2243 | 0.9487 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
DiederikMartens/mBERT_sa_cv_13_fold7 | DiederikMartens | 2024-05-28T16:24:55Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T16:03:20Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: mBERT_sa_cv_13_fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBERT_sa_cv_13_fold7
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5312
- F1: 0.6178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
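The hyperparameters above map roughly onto `TrainingArguments` as sketched below; this is an approximation, not the script that produced the checkpoint:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters; Adam betas and
# epsilon are the library defaults, matching the values above.
args = TrainingArguments(
    output_dir="mBERT_sa_cv_13_fold7",
    learning_rate=4.47e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```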
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.5553 | 0.4855 |
| 0.5476 | 2.0 | 650 | 0.4588 | 0.5491 |
| 0.5476 | 3.0 | 975 | 0.5312 | 0.6178 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
zoedc/resume_model_3labels_final | zoedc | 2024-05-28T16:24:29Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T15:47:05Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resume_model_3labels_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resume_model_3labels_final
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3759
- Accuracy: 0.8333
- F1 Weighted: 0.7882
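A minimal inference sketch for this 3-label classifier; the concrete label names are not listed here, so the returned labels depend on the saved config:
```python
from transformers import pipeline

# top_k=None returns the scores for all three labels.
classifier = pipeline(
    "text-classification",
    model="zoedc/resume_model_3labels_final",
    top_k=None,
)
print(classifier("Experienced data analyst skilled in SQL, Python and reporting."))
```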
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|
| 1.0074 | 1.0 | 60 | 0.7552 | 0.7667 | 0.6835 |
| 0.693 | 2.0 | 120 | 0.6421 | 0.7333 | 0.6505 |
| 0.5233 | 3.0 | 180 | 0.3900 | 0.8333 | 0.7882 |
| 0.3459 | 4.0 | 240 | 0.3759 | 0.8333 | 0.7882 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DiederikMartens/tsBERT_sa_cv_13_fold7 | DiederikMartens | 2024-05-28T16:23:08Z | 113 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:igorsterner/german-english-code-switching-bert",
"base_model:finetune:igorsterner/german-english-code-switching-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T16:01:31Z | ---
license: mit
base_model: igorsterner/german-english-code-switching-bert
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: tsBERT_sa_cv_13_fold7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tsBERT_sa_cv_13_fold7
This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4860
- F1: 0.7193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.4105 | 0.5852 |
| 0.4368 | 2.0 | 650 | 0.3952 | 0.6444 |
| 0.4368 | 3.0 | 975 | 0.4860 | 0.7193 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf | RichardErkhov | 2024-05-28T16:22:27Z | 34 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-28T12:47:11Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
zephyr-beta-Nebula-v2-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/zephyr-beta-Nebula-v2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [zephyr-beta-Nebula-v2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [zephyr-beta-Nebula-v2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [zephyr-beta-Nebula-v2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [zephyr-beta-Nebula-v2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [zephyr-beta-Nebula-v2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [zephyr-beta-Nebula-v2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [zephyr-beta-Nebula-v2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [zephyr-beta-Nebula-v2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [zephyr-beta-Nebula-v2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [zephyr-beta-Nebula-v2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [zephyr-beta-Nebula-v2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [zephyr-beta-Nebula-v2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [zephyr-beta-Nebula-v2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [zephyr-beta-Nebula-v2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [zephyr-beta-Nebula-v2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [zephyr-beta-Nebula-v2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [zephyr-beta-Nebula-v2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [zephyr-beta-Nebula-v2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [zephyr-beta-Nebula-v2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [zephyr-beta-Nebula-v2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [zephyr-beta-Nebula-v2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [zephyr-beta-Nebula-v2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-beta-Nebula-v2-7B-gguf/blob/main/zephyr-beta-Nebula-v2-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# zephyr-beta-Nebula-v2-7B
zephyr-beta-Nebula-v2-7B is a merge of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora)
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-----------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
| DROP (3-shot) | |
|
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2 | Zoyd | 2024-05-28T16:15:57Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"dataset:mlabonne/orpo-dpo-mix-40k",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-28T15:45:42Z | ---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- dpo
---
**Exllamav2** quant (**exl2** / **5.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> |
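Each row above is a separate repo, so a chosen quant can be pulled down with `huggingface_hub` for use with any EXL2-capable loader; the 5.0 bpw repo below is just an example from the table:
```python
from huggingface_hub import snapshot_download

# Download the whole quant repo for the bpw that fits your VRAM budget.
local_dir = snapshot_download(
    repo_id="Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2"
)
print(local_dir)
```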
# NeuralDaredevil-8B-abliterated

This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
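For readers unfamiliar with the setup, a rough sketch of this kind of one-epoch TRL DPO run is shown below; it is not the author's training script, and argument names shift slightly between `trl` versions:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Illustrative only: the base model and dataset come from the description
# above; output dir, beta and other settings are assumptions.
model_name = "mlabonne/Daredevil-8B-abliterated"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = DPOConfig(output_dir="neuraldaredevil-dpo", num_train_epochs=1, beta=0.1)
trainer = DPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```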
## Evaluation
### Open LLM Leaderboard
TBD.
### Nous
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [π](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** |
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [π](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [π](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [π](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [π](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [π](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [π](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## Model family tree
 |
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825 | fine-tuned | 2024-05-28T16:15:08Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T16:14:14Z | ---
license: apache-2.0
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
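Because this is an embedding model, a typical end-to-end use is retrieval; a small sketch on the same checkpoint (the corpus and query strings are made up):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer(
    'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825',
    trust_remote_code=True
)

corpus = [
    'Renewable energy lowers long-term emissions.',
    'Coal plants are cheap to keep running.',
]
query = 'arguments in favour of clean energy'

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Rank the corpus against the query by cosine similarity.
print(util.semantic_search(query_emb, corpus_emb, top_k=2))
```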
|
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2 | Zoyd | 2024-05-28T16:14:40Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"dataset:mlabonne/orpo-dpo-mix-40k",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-28T15:09:35Z | ---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- dpo
---
**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> |
# NeuralDaredevil-8B-abliterated

This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
## Evaluation
### Open LLM Leaderboard
TBD.
### Nous
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [π](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** |
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [π](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [π](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [π](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [π](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [π](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [π](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## Model family tree
 |
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-859511 | fine-tuned | 2024-05-28T16:14:29Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-859511",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T16:13:34Z | ---
license: apache-2.0
datasets:
- fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-859511
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-859511',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2 | Zoyd | 2024-05-28T16:13:55Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"dataset:mlabonne/orpo-dpo-mix-40k",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-28T15:39:53Z | ---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- dpo
---
**Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> |
# NeuralDaredevil-8B-abliterated

This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
## Evaluation
### Open LLM Leaderboard
TBD.
### Nous
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [π](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** |
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [π](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [π](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [π](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [π](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [π](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [π](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## Model family tree
 |
roscazo/vih_explainability3 | roscazo | 2024-05-28T16:13:39Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:PlanTL-GOB-ES/bsc-bio-ehr-es",
"base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-28T16:13:21Z | ---
license: apache-2.0
base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: vih_explainability3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vih_explainability3
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3951
- Roc Auc: 0.8213
- Ap Score: 0.7049
- Precision: 0.9836
- Recall: 0.6452
- F1: 0.7792
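For reference, metrics like these can be computed from raw predictions with `scikit-learn`; a toy sketch with dummy values, not the actual evaluation data:
```python
from sklearn.metrics import average_precision_score, f1_score, roc_auc_score

# Dummy binary labels and positive-class probabilities, for illustration only.
y_true = [1, 0, 1, 1, 0, 0]
y_prob = [0.92, 0.30, 0.55, 0.81, 0.10, 0.47]
y_pred = [int(p >= 0.5) for p in y_prob]

print("ROC AUC:", roc_auc_score(y_true, y_prob))
print("AP score:", average_precision_score(y_true, y_prob))
print("F1:", f1_score(y_true, y_pred))
```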
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Roc Auc | Ap Score | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:-------:|:--------:|:---------:|:------:|:------:|
| 0.4261 | 0.8475 | 100 | 0.3832 | 0.6129 | 0.3793 | 1.0 | 0.2258 | 0.3684 |
| 0.2405 | 1.6949 | 200 | 0.4736 | 0.6344 | 0.4138 | 1.0 | 0.2688 | 0.4237 |
| 0.2088 | 2.5424 | 300 | 0.3452 | 0.7729 | 0.6274 | 0.9808 | 0.5484 | 0.7034 |
| 0.2196 | 3.3898 | 400 | 0.3644 | 0.7151 | 0.5431 | 1.0 | 0.4301 | 0.6015 |
| 0.2068 | 4.2373 | 500 | 0.5156 | 0.6344 | 0.4138 | 1.0 | 0.2688 | 0.4237 |
| 0.1374 | 5.0847 | 600 | 0.3988 | 0.7944 | 0.6619 | 0.9821 | 0.5914 | 0.7383 |
| 0.1098 | 5.9322 | 700 | 0.3629 | 0.8051 | 0.6791 | 0.9828 | 0.6129 | 0.7550 |
| 0.0914 | 6.7797 | 800 | 0.3394 | 0.8240 | 0.6934 | 0.9531 | 0.6559 | 0.7771 |
| 0.088 | 7.6271 | 900 | 0.3612 | 0.8334 | 0.7009 | 0.9403 | 0.6774 | 0.7875 |
| 0.0787 | 8.4746 | 1000 | 0.3801 | 0.8213 | 0.7049 | 0.9836 | 0.6452 | 0.7792 |
| 0.0588 | 9.3220 | 1100 | 0.3951 | 0.8213 | 0.7049 | 0.9836 | 0.6452 | 0.7792 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2 | Zoyd | 2024-05-28T16:13:37Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"dataset:mlabonne/orpo-dpo-mix-40k",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-28T15:16:51Z | ---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- dpo
---
**Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> |
# NeuralDaredevil-8B-abliterated

This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
## Evaluation
### Open LLM Leaderboard
TBD.
### Nous
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [π](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** |
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [π](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [π](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [π](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [π](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [π](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [π](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## Model family tree
 |
fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-110174 | fine-tuned | 2024-05-28T16:13:23Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-110174",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T16:12:27Z | ---
license: apache-2.0
datasets:
- fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-110174
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-110174',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
Kovalev/aya23_8B_kazparc | Kovalev | 2024-05-28T16:10:16Z | 0 | 0 | null | [
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-05-28T16:09:26Z | ---
license: cc-by-nc-4.0
---
|
Toshifumi/Llama3-IMDB_20240528v1 | Toshifumi | 2024-05-28T16:08:11Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T16:02:49Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Toshifumi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-418918 | fine-tuned | 2024-05-28T16:06:47Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-418918",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T16:06:12Z | ---
license: apache-2.0
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-418918
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-418918',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
ybelkada/tiny-random-llama-Q6_K-GGUF | ybelkada | 2024-05-28T16:06:31Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"endpoints_compatible",
"region:us"
] | null | 2024-05-28T16:06:30Z | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# ybelkada/tiny-random-llama-Q6_K-GGUF
This model was converted to GGUF format from [`ybelkada/tiny-random-llama`](https://huggingface.co/ybelkada/tiny-random-llama) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ybelkada/tiny-random-llama) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ybelkada/tiny-random-llama-Q6_K-GGUF --model tiny-random-llama.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo ybelkada/tiny-random-llama-Q6_K-GGUF --model tiny-random-llama.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-random-llama.Q6_K.gguf -n 128
```
|
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_nuB5P4de | MoTHer-VTHR | 2024-05-28T16:06:30Z | 170 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-28T16:06:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
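Given the repo's `vit` / `image-classification` tags, a generic sketch is shown below; the preprocessing details and label names are assumptions until the authors document them:
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_nuB5P4de"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")  # any RGB image; the path is a placeholder
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```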
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_ehobdK3q | MoTHer-VTHR | 2024-05-28T16:05:59Z | 166 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-28T15:48:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_Kb6teTEK | MoTHer-VTHR | 2024-05-28T16:05:52Z | 166 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-28T15:48:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
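This card also leaves the quickstart unspecified. The sketch below shows the same assumed image-classification usage via the high-level `pipeline` API; the image path is a hypothetical placeholder.

```python
# Hypothetical quickstart (not from the original card): the high-level
# pipeline API, assuming plain image-classification behavior.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_Kb6teTEK",
)

# Accepts a local file path, a URL, or a PIL.Image; this path is a placeholder.
predictions = classifier("path/to/your_image.jpg")
for pred in predictions:
    print(pred["label"], round(pred["score"], 4))
```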
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-935443 | fine-tuned | 2024-05-28T16:05:51Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-935443",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T16:05:19Z | ---
license: apache-2.0
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-935443
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
# Load the fine-tuned embedding model from the Hub.
model = SentenceTransformer(
    'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-935443',
    trust_remote_code=True
)
# Encode two texts into dense vectors.
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-186741 | fine-tuned | 2024-05-28T16:05:51Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-186741",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-28T16:05:17Z | ---
license: apache-2.0
datasets:
- fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-186741
- allenai/c4
language:
- en
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
# Load the fine-tuned embedding model from the Hub.
model = SentenceTransformer(
    'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-186741',
    trust_remote_code=True
)
# Encode two texts into dense vectors.
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|