modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Johnie2Turbo/llama-13b_adv_text | Johnie2Turbo | 2024-05-20T05:27:35Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ru",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T05:20:06Z | ---
language:
- ru
---
It can write advertising copy and ad listings for cars and laptops.
The correct prompt format is:
[INST] {prompt} [/INST]
It was trained on the instructions "Напиши рекламный текст для ..." ("Write an advertising text for ...") and "Напиши рекламное объявление для ..." ("Write an advertisement for ..."). |
Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2 | Zoyd | 2024-05-20T05:26:58Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:aqua_rat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"arxiv:2402.13228",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-20T05:03:27Z | ---
library_name: transformers
license: llama2
datasets:
- aqua_rat
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
---
**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_2bpw_exl2)**</center> | <center>20886 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-2_5bpw_exl2)**</center> | <center>23192 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_0bpw_exl2)**</center> | <center>27273 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_5bpw_exl2)**</center> | <center>31356 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-3_75bpw_exl2)**</center> | <center>33398 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_0bpw_exl2)**</center> | <center>35434 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-4_25bpw_exl2)**</center> | <center>37456 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-5_0bpw_exl2)**</center> | <center>43598 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2)**</center> | <center>51957 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_5bpw_exl2)**</center> | <center>56014 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-8_0bpw_exl2)**</center> | <center>60211 MB</center> | <center>8</center> |
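A minimal sketch for fetching one of the quants in the table above with `huggingface_hub` (the local folder name is arbitrary); the downloaded weights then need an EXL2-capable backend such as ExLlamaV2:
```python
from huggingface_hub import snapshot_download

# Download the 6.0 bpw quant from the table above; swap repo_id for a smaller quant if VRAM is tight.
snapshot_download(
    repo_id="Zoyd/abacusai_Smaug-Llama-3-70B-Instruct-6_0bpw_exl2",
    local_dir="Smaug-Llama-3-70B-Instruct-6_0bpw_exl2",
)
```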
# Smaug-Llama-3-70B-Instruct
### Built with Meta Llama 3

This model was built using a new Smaug recipe for improving performance on real world multi-turn conversations applied to
[meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
The model outperforms Llama-3-70B-Instruct substantially, and is on par with GPT-4-Turbo, on MT-Bench (see below).
EDIT: Smaug-Llama-3-70B-Instruct is the top open source model on Arena-Hard currently! It is also nearly on par with Claude Opus - see below.
We are conducting additional benchmark evaluations and will add those when available.
### Model Description
- **Developed by:** [Abacus.AI](https://abacus.ai)
- **License:** https://llama.meta.com/llama3/license/
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
## Evaluation
### Arena-Hard
Score vs. selected others (sourced from https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge)
| Model | Score | 95% Confidence Interval | Average Tokens |
| :---- | ---------: | ----------: | ------: |
| GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 |
| Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 |
| **Smaug-Llama-3-70B-Instruct** | 56.7 | (-2.2, 2.6) | 661 |
| GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 |
| Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 |
| Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 |
| GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 |
| Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 |
| Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 |
| Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 |
| Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 |
| Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 |
| GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 |
### MT-Bench
```
########## First turn ##########
score
model turn
Smaug-Llama-3-70B-Instruct 1 9.40000
GPT-4-Turbo 1 9.37500
Meta-Llama-3-70B-Instruct 1 9.21250
########## Second turn ##########
score
model turn
Smaug-Llama-3-70B-Instruct 2 9.0125
GPT-4-Turbo 2 9.0000
Meta-Llama-3-70B-Instruct 2 8.8000
########## Average ##########
score
model
Smaug-Llama-3-70B-Instruct 9.206250
GPT-4-Turbo 9.187500
Meta-Llama-3-70B-Instruct 9.006250
```
| Model | First turn | Second Turn | Average |
| :---- | ---------: | ----------: | ------: |
| **Smaug-Llama-3-70B-Instruct** | 9.40 | 9.01 | 9.21 |
| GPT-4-Turbo | 9.38 | 9.00 | 9.19 |
| Meta-Llama-3-70B-Instruct | 9.21 | 8.80 | 9.01 |
This version of Smaug uses new techniques and new data compared to [Smaug-72B](https://huggingface.co/abacusai/Smaug-72B-v0.1), and more information will be released later on. For now, see the previous Smaug paper: https://arxiv.org/abs/2402.13228. |
baldwin6/Bolaco | baldwin6 | 2024-05-20T05:24:52Z | 0 | 2 | null | [
"region:us"
] | null | 2024-05-16T08:30:48Z | Checkpoints for Bolaco. The code is available at https://github.com/Dereck0602/Bolaco.
|
danieljhand/distilbert-base-uncased-finetuned-wine | danieljhand | 2024-05-20T05:24:43Z | 123 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-17T02:07:53Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-wine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-wine
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9082
- Accuracy: 0.7314
- F1: 0.7222
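A minimal inference sketch (it assumes the tokenizer and label names were pushed with this checkpoint; the wine-review input is invented for illustration):
```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub.
classifier = pipeline("text-classification", model="danieljhand/distilbert-base-uncased-finetuned-wine")

# Example input is made up; the label set depends on how the model was trained.
print(classifier("Aromas of ripe cherry and vanilla with a smooth, lingering finish."))
```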
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.6559 | 1.0 | 1101 | 1.0917 | 0.6792 | 0.6623 |
| 1.0185 | 2.0 | 2202 | 0.9466 | 0.7214 | 0.7103 |
| 0.8851 | 3.0 | 3303 | 0.9082 | 0.7314 | 0.7222 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jayasuryajsk/Llama-3-merge-4.2B | jayasuryajsk | 2024-05-20T05:21:45Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T05:17:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF | leafspark | 2024-05-20T05:19:40Z | 6 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T05:18:51Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-34B-Chat-16K`](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF --model yi-1.5-34b-chat-16k.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF --model yi-1.5-34b-chat-16k.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-1.5-34b-chat-16k.Q4_K_M.gguf -n 128
```
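As an alternative to the CLI, a hedged sketch using the llama-cpp-python bindings (not covered by the original card; the GGUF file is downloaded from this repo on first use):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="leafspark/Yi-1.5-34B-Chat-16K-Q4_K_M-GGUF",
    filename="yi-1.5-34b-chat-16k.Q4_K_M.gguf",
    n_ctx=2048,
)

# Simple completion call; adjust max_tokens and sampling settings as needed.
print(llm("The meaning to life and the universe is", max_tokens=64)["choices"][0]["text"])
```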
|
Hemg/token-classification | Hemg | 2024-05-20T05:16:39Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-20T05:12:42Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: token-classification
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5268630849220104
- name: Recall
type: recall
value: 0.28174235403151066
- name: F1
type: f1
value: 0.36714975845410625
- name: Accuracy
type: accuracy
value: 0.939506647856013
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2813
- Precision: 0.5269
- Recall: 0.2817
- F1: 0.3671
- Accuracy: 0.9395
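A minimal inference sketch (it assumes the tokenizer and WNUT-17 label mapping were pushed with the weights; the example sentence is invented):
```python
from transformers import pipeline

# WNUT-17 targets emerging/rare entities; "simple" aggregation groups word pieces into entity spans.
ner = pipeline("token-classification", model="Hemg/token-classification", aggregation_strategy="simple")
print(ner("Empire State Building is lit up for the Mets game tonight"))
```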
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 107 | 0.3021 | 0.4217 | 0.1548 | 0.2264 | 0.9342 |
| No log | 2.0 | 214 | 0.2813 | 0.5269 | 0.2817 | 0.3671 | 0.9395 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
sidddddddddddd/llama-3-8b-kub | sidddddddddddd | 2024-05-20T05:13:01Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T05:13:00Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** sidddddddddddd
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tsavage68/MedQA_L3_150steps_1e6rate_03beat_CSFTDPO | tsavage68 | 2024-05-20T05:04:49Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T05:01:00Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_150steps_1e6rate_03beat_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_150steps_1e6rate_03beat_CSFTDPO
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set (see the note after this list for how the reward columns are defined):
- Loss: 0.5020
- Rewards/chosen: -0.9020
- Rewards/rejected: -1.9172
- Rewards/accuracies: 0.7297
- Rewards/margins: 1.0152
- Logps/rejected: -27.7072
- Logps/chosen: -21.2293
- Logits/rejected: -1.0337
- Logits/chosen: -1.0327
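For reference, the reward columns above follow the standard DPO formulation used by TRL (a sketch; β is the DPO temperature, which the "03beat" in the model name suggests is 0.3):
```latex
r_\theta(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right), \qquad
\text{rewards/margins} = r_\theta(x, y_{\text{chosen}}) - r_\theta(x, y_{\text{rejected}})
```
Rewards/accuracies is the fraction of evaluation pairs for which the chosen response receives the higher reward.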
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7072 | 0.0489 | 50 | 0.6474 | 0.1422 | 0.0242 | 0.6505 | 0.1180 | -21.2360 | -17.7487 | -0.9397 | -0.9391 |
| 0.6194 | 0.0977 | 100 | 0.5755 | -0.5279 | -1.1917 | 0.6989 | 0.6638 | -25.2888 | -19.9824 | -1.0174 | -1.0166 |
| 0.5632 | 0.1466 | 150 | 0.5020 | -0.9020 | -1.9172 | 0.7297 | 1.0152 | -27.7072 | -21.2293 | -1.0337 | -1.0327 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ridham1317/whisper-small-ft-common-voice | ridham1317 | 2024-05-20T05:03:37Z | 148 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-20T05:02:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AAA01101312/distilbert-base-uncased-finetuned-clinc | AAA01101312 | 2024-05-20T05:02:02Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-16T01:54:19Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7977
- Accuracy: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.3065 | 0.7232 |
| 3.8097 | 2.0 | 636 | 1.8930 | 0.8487 |
| 3.8097 | 3.0 | 954 | 1.1797 | 0.8913 |
| 1.7181 | 4.0 | 1272 | 0.8830 | 0.9113 |
| 0.9252 | 5.0 | 1590 | 0.7977 | 0.9155 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cpu
- Datasets 2.19.1
- Tokenizers 0.15.2
|
omezzinemariem/mistral-text-to-RULE2 | omezzinemariem | 2024-05-20T05:01:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-05-20T05:01:01Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
BothBosu/cnn-scam-classifier-v1.3 | BothBosu | 2024-05-20T04:57:30Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T04:57:28Z | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
BothBosu/bilstm-scam-classifier-v1.3 | BothBosu | 2024-05-20T04:54:56Z | 53 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T04:54:49Z | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
damingli09/ppo-LunarLander-v2 | damingli09 | 2024-05-20T04:51:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-20T04:50:52Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.46 +/- 17.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual huggingface_sb3 naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed) and load the trained PPO policy.
checkpoint = load_from_hub(repo_id="damingli09/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Ruiz3/phi-2-kingshipAI-product-explainer | Ruiz3 | 2024-05-20T04:47:09Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T04:26:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahmeda335/experimentGA1 | ahmeda335 | 2024-05-20T04:39:10Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T04:11:42Z | ---
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
library_name: transformers
tags:
- mergekit
- merge
---
# mergeGA2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- sources:
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```
|
OwOpeepeepoopoo/NoSoup4U2 | OwOpeepeepoopoo | 2024-05-20T04:36:33Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T00:25:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
expertlearning/vision-perceiver-fourier | expertlearning | 2024-05-20T04:33:20Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"perceiver",
"image-classification",
"dataset:imagenet",
"arxiv:2107.14795",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T04:31:14Z | ---
license: apache-2.0
datasets:
- imagenet
---
# Perceiver IO for vision (fixed Fourier position embeddings)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow the model to flexibly decode the final hidden states of the latents into outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values.
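As a rough illustration of what fixed 2D Fourier position features look like (a simplified sketch, not the exact implementation in the Transformers library; band count and maximum frequency follow the paper's ImageNet setup):
```python
import torch

def fourier_position_encodings(h, w, num_bands=64, max_freq=224):
    # Normalized 2D pixel coordinates in [-1, 1]
    ys = torch.linspace(-1.0, 1.0, h)
    xs = torch.linspace(-1.0, 1.0, w)
    pos = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)   # (h, w, 2)
    # Frequencies linearly spaced from 1 to max_freq / 2 (the Nyquist frequency)
    freqs = torch.linspace(1.0, max_freq / 2, num_bands)
    scaled = pos[..., None] * freqs * torch.pi                         # (h, w, 2, num_bands)
    enc = torch.cat([scaled.sin(), scaled.cos()], dim=-1)              # (h, w, 2, 2 * num_bands)
    enc = torch.cat([pos[..., None], enc], dim=-1)                     # append the raw coordinates
    return enc.flatten(2)                                              # (h, w, 2 * (2 * num_bands + 1))

print(fourier_position_encodings(224, 224).shape)  # torch.Size([224, 224, 258])
```
These position features are concatenated with the RGB values of each pixel before the cross-attention with the latents.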
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import PerceiverImageProcessor, PerceiverForImageClassificationFourier
import requests
from PIL import Image
processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver-fourier")
model = PerceiverForImageClassificationFourier.from_pretrained("deepmind/vision-perceiver-fourier")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare input
inputs = processor(image, return_tensors="pt").pixel_values
# forward pass
outputs = model(inputs)
logits = outputs.logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
# Expected output: Predicted class: tabby, tabby cat
```
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
### Pretraining
Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
This model is able to achieve a top-1 accuracy of 79.0 on ImageNet-1k, and 84.5 when pre-trained on a large-scale dataset (JFT-300M, an internal dataset of Google).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
Raneechu/miningsmall | Raneechu | 2024-05-20T04:31:31Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-20T04:31:27Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: miningsmall
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# miningsmall
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Raneechu/mininglit | Raneechu | 2024-05-20T04:30:01Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-20T04:29:58Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: mininglit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mininglit
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
## Training procedure
### Framework versions
- PEFT 0.6.2
|
ghost613/llama8_on_korean_summary | ghost613 | 2024-05-20T04:27:02Z | 3 | 1 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"base_model:adapter:beomi/Llama-3-Open-Ko-8B-Instruct-preview",
"license:other",
"region:us"
] | null | 2024-05-16T19:22:39Z | ---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
model-index:
- name: llama8_on_korean_summary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8_on_korean_summary
This model is a fine-tuned version of [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 760
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7767 | 0.26 | 20 | 1.5096 |
| 1.2382 | 0.53 | 40 | 1.0459 |
| 0.9915 | 0.79 | 60 | 0.9451 |
| 0.9126 | 1.05 | 80 | 0.8893 |
| 0.8501 | 1.32 | 100 | 0.8515 |
| 0.8113 | 1.58 | 120 | 0.8192 |
| 0.8019 | 1.84 | 140 | 0.7939 |
| 0.7239 | 2.11 | 160 | 0.7795 |
| 0.6621 | 2.37 | 180 | 0.7594 |
| 0.6457 | 2.63 | 200 | 0.7433 |
| 0.6417 | 2.89 | 220 | 0.7281 |
| 0.5929 | 3.16 | 240 | 0.7305 |
| 0.5245 | 3.42 | 260 | 0.7242 |
| 0.5291 | 3.68 | 280 | 0.7154 |
| 0.528 | 3.95 | 300 | 0.7109 |
| 0.4696 | 4.21 | 320 | 0.7257 |
| 0.4474 | 4.47 | 340 | 0.7251 |
| 0.4572 | 4.74 | 360 | 0.7252 |
| 0.4391 | 5.0 | 380 | 0.7202 |
| 0.3794 | 5.26 | 400 | 0.7462 |
| 0.3771 | 5.53 | 420 | 0.7568 |
| 0.3754 | 5.79 | 440 | 0.7453 |
| 0.3739 | 6.05 | 460 | 0.7597 |
| 0.3179 | 6.32 | 480 | 0.7803 |
| 0.3328 | 6.58 | 500 | 0.7699 |
| 0.3259 | 6.84 | 520 | 0.7710 |
| 0.3014 | 7.11 | 540 | 0.8083 |
| 0.2759 | 7.37 | 560 | 0.8017 |
| 0.2758 | 7.63 | 580 | 0.7954 |
| 0.2798 | 7.89 | 600 | 0.8003 |
| 0.2545 | 8.16 | 620 | 0.8325 |
| 0.2451 | 8.42 | 640 | 0.8282 |
| 0.2355 | 8.68 | 660 | 0.8318 |
| 0.2382 | 8.95 | 680 | 0.8300 |
| 0.2256 | 9.21 | 700 | 0.8544 |
| 0.212 | 9.47 | 720 | 0.8532 |
| 0.2108 | 9.74 | 740 | 0.8529 |
| 0.2125 | 10.0 | 760 | 0.8536 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.0 |
ukung/Nusantara-4b-Indo-Chat-GGUF | ukung | 2024-05-20T04:26:42Z | 97 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T02:43:48Z | ---
license: apache-2.0
---
|
ukung/Nusantara-2.7b-Indo-Chat-v0.2-GGUF | ukung | 2024-05-20T04:14:23Z | 4 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T03:38:55Z | ---
license: apache-2.0
---
|
akiseid/AmharicNewsNonCleanedNonWeighted | akiseid | 2024-05-20T04:13:43Z | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T03:12:02Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: AmharicNewsNonCleanedNonWeighted
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicNewsNonCleanedNonWeighted
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1726
- Accuracy: 0.9564
- Precision: 0.9563
- Recall: 0.9564
- F1: 0.9564
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
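A minimal sketch of how the settings above map onto `TrainingArguments` (the `output_dir` is a placeholder; dataset loading and the `Trainer` wiring are omitted):

```python
from transformers import TrainingArguments

# Rough mapping of the listed hyperparameters; not the exact training script.
training_args = TrainingArguments(
    output_dir="AmharicNewsNonCleanedNonWeighted",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```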
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2237 | 1.0 | 945 | 0.2308 | 0.9054 | 0.9145 | 0.9054 | 0.9031 |
| 0.3067 | 2.0 | 1890 | 0.1760 | 0.9384 | 0.9388 | 0.9384 | 0.9379 |
| 0.143 | 3.0 | 2835 | 0.1510 | 0.9480 | 0.9486 | 0.9480 | 0.9482 |
| 0.1306 | 4.0 | 3780 | 0.1550 | 0.9544 | 0.9547 | 0.9544 | 0.9544 |
| 0.0825 | 5.0 | 4725 | 0.1726 | 0.9564 | 0.9563 | 0.9564 | 0.9564 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
abc88767/4sc102 | abc88767 | 2024-05-20T04:13:16Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T04:11:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Raneechu/mininglarge | Raneechu | 2024-05-20T04:11:28Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-20T04:11:24Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: mininglarge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mininglarge
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
## Training procedure
### Framework versions
- PEFT 0.6.2
|
abc88767/22c102 | abc88767 | 2024-05-20T04:09:35Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T04:08:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ukung/Nusantara-7b-Indo-Chat-GGUF | ukung | 2024-05-20T04:08:48Z | 6 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T02:42:44Z | ---
license: apache-2.0
---
|
abc88767/9sc102 | abc88767 | 2024-05-20T04:04:52Z | 97 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T04:03:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abc88767/8sc102 | abc88767 | 2024-05-20T04:01:24Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T03:59:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tsavage68/MedQA_L3_600steps_1e7rate_01beta_CSFTDPO | tsavage68 | 2024-05-20T03:58:43Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T02:46:28Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_600steps_1e7rate_01beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_600steps_1e7rate_01beta_CSFTDPO
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6692
- Rewards/chosen: 0.0482
- Rewards/rejected: -0.0053
- Rewards/accuracies: 0.6681
- Rewards/margins: 0.0535
- Logps/rejected: -21.3695
- Logps/chosen: -17.7404
- Logits/rejected: -0.9398
- Logits/chosen: -0.9393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6951 | 0.0489 | 50 | 0.6935 | 0.0003 | 0.0009 | 0.4901 | -0.0006 | -21.3079 | -18.2196 | -0.9258 | -0.9253 |
| 0.6892 | 0.0977 | 100 | 0.6881 | 0.0374 | 0.0268 | 0.6044 | 0.0106 | -21.0482 | -17.8488 | -0.9281 | -0.9276 |
| 0.6801 | 0.1466 | 150 | 0.6794 | 0.0588 | 0.0292 | 0.6418 | 0.0296 | -21.0241 | -17.6343 | -0.9314 | -0.9309 |
| 0.6807 | 0.1954 | 200 | 0.6767 | 0.0584 | 0.0227 | 0.6549 | 0.0358 | -21.0897 | -17.6383 | -0.9345 | -0.9339 |
| 0.6829 | 0.2443 | 250 | 0.6726 | 0.0560 | 0.0106 | 0.6571 | 0.0454 | -21.2109 | -17.6631 | -0.9367 | -0.9362 |
| 0.6656 | 0.2931 | 300 | 0.6715 | 0.0540 | 0.0059 | 0.6505 | 0.0481 | -21.2575 | -17.6830 | -0.9382 | -0.9376 |
| 0.6955 | 0.3420 | 350 | 0.6697 | 0.0524 | 0.0002 | 0.6571 | 0.0522 | -21.3145 | -17.6986 | -0.9384 | -0.9378 |
| 0.6605 | 0.3908 | 400 | 0.6697 | 0.0493 | -0.0031 | 0.6505 | 0.0524 | -21.3476 | -17.7294 | -0.9393 | -0.9388 |
| 0.6718 | 0.4397 | 450 | 0.6689 | 0.0495 | -0.0047 | 0.6527 | 0.0541 | -21.3631 | -17.7279 | -0.9396 | -0.9390 |
| 0.6734 | 0.4885 | 500 | 0.6687 | 0.0486 | -0.0059 | 0.6505 | 0.0545 | -21.3751 | -17.7362 | -0.9397 | -0.9392 |
| 0.6525 | 0.5374 | 550 | 0.6691 | 0.0482 | -0.0056 | 0.6615 | 0.0537 | -21.3720 | -17.7410 | -0.9398 | -0.9393 |
| 0.6637 | 0.5862 | 600 | 0.6692 | 0.0482 | -0.0053 | 0.6681 | 0.0535 | -21.3695 | -17.7404 | -0.9398 | -0.9393 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
abc88767/2c102 | abc88767 | 2024-05-20T03:57:26Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T03:55:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OjciecTadeusz/dreamshaper8 | OjciecTadeusz | 2024-05-20T03:54:39Z | 4 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:Lykon/DreamShaper",
"base_model:adapter:Lykon/DreamShaper",
"region:us"
] | text-to-image | 2024-05-20T03:50:52Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: photo of landscape, high quality
output:
url: images/image (35).png
base_model: Lykon/DreamShaper
instance_prompt: null
---
# dreamshaper8
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/JasonJwba/dreamshaper8/tree/main) them in the Files & versions tab.
|
afiqlol/Malay-Sentiment | afiqlol | 2024-05-20T03:52:40Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:citizenlab/twitter-xlm-roberta-base-sentiment-finetunned",
"base_model:finetune:citizenlab/twitter-xlm-roberta-base-sentiment-finetunned",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T02:11:43Z | ---
base_model: citizenlab/twitter-xlm-roberta-base-sentiment-finetunned
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Malay-Sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Malay-Sentiment
This model is a fine-tuned version of [citizenlab/twitter-xlm-roberta-base-sentiment-finetunned](https://huggingface.co/citizenlab/twitter-xlm-roberta-base-sentiment-finetunned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5757
- Accuracy: 0.7578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7066 | 1.0 | 723 | 0.5757 | 0.7578 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cpu
- Datasets 2.14.5
- Tokenizers 0.15.0
|
PQlet/textual-inversion-v2-ablation-vec3-img1 | PQlet | 2024-05-20T03:48:22Z | 3 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-20T01:50:18Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual Inversion training - PQlet/textual-inversion-v2-ablation-vec3-img1
The generated images are below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
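A minimal usage sketch (assumptions: the learned embedding is stored in this repo in the standard diffusers `learned_embeds` format, and the placeholder token below is hypothetical — check the repo files for the actual token name):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model this embedding was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the textual-inversion embedding from this repository.
pipe.load_textual_inversion("PQlet/textual-inversion-v2-ablation-vec3-img1")

# "<concept>" is a hypothetical placeholder token; replace it with the trained token.
image = pipe("a photo of <concept> on a wooden table, high quality").images[0]
image.save("example.png")
```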
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
RichardErkhov/lodrick-the-lafted_-_Grafted-Hermetic-Platypus-C-2x7B-4bits | RichardErkhov | 2024-05-20T03:39:17Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-20T03:34:05Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Grafted-Hermetic-Platypus-C-2x7B - bnb 4bits
- Model creator: https://huggingface.co/lodrick-the-lafted/
- Original model: https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B/
Original model description:
---
license: apache-2.0
datasets:
- lodrick-the-lafted/Hermes-217K
- garage-bAInd/Open-Platypus
- jondurbin/airoboros-3.2
model-index:
- name: Grafted-Hermetic-Platypus-C-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 58.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.87
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B
name: Open LLM Leaderboard
---
<img src=https://huggingface.co/lodrick-the-lafted/Grafted-Hermetic-Platypus-C-2x7B/resolve/main/ghp.png>
# Grafted-Hermetic-Platypus-C-2x7B
MoE merge of
- [Platyboros-Instruct-7B](https://huggingface.co/lodrick-the-lafted/Platyboros-Instruct-7B)
- [Hermes-Instruct-7B-217K](https://huggingface.co/lodrick-the-lafted/Hermes-Instruct-7B-217K)
<br />
<br />
# Prompt Format
Both the default Mistral-Instruct tags and Alpaca are fine, so either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```
or
```
{sys_prompt}
### Instruction:
{instruction}
### Response:
```
The tokenizer default is Alpaca this time around.
<br />
<br />
# Usage
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "lodrick-the-lafted/Grafted-Hermetic-Platypus-A-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.bfloat16},
)
messages = [{"role": "user", "content": "Give me a cooking recipe for a peach pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Hermetic-Platypus-C-2x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.39|
|AI2 Reasoning Challenge (25-Shot)|58.96|
|HellaSwag (10-Shot) |82.77|
|MMLU (5-Shot) |62.08|
|TruthfulQA (0-shot) |60.87|
|Winogrande (5-shot) |77.74|
|GSM8k (5-shot) |43.90|
|
RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf | RichardErkhov | 2024-05-20T03:24:51Z | 32 | 0 | null | [
"gguf",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T02:05:20Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
WizardLM-2-7B - GGUF
- Model creator: https://huggingface.co/dreamgen/
- Original model: https://huggingface.co/dreamgen/WizardLM-2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [WizardLM-2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [WizardLM-2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [WizardLM-2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [WizardLM-2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [WizardLM-2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [WizardLM-2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [WizardLM-2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [WizardLM-2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [WizardLM-2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [WizardLM-2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [WizardLM-2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [WizardLM-2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [WizardLM-2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [WizardLM-2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [WizardLM-2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [WizardLM-2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [WizardLM-2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [WizardLM-2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [WizardLM-2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [WizardLM-2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [WizardLM-2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [WizardLM-2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf/blob/main/WizardLM-2-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
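One way to run one of these files locally is via `llama-cpp-python` (a sketch, assuming that package is installed; any of the filenames above can be substituted):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a quantized file from this repo and load it.
model_path = hf_hub_download(
    repo_id="RichardErkhov/dreamgen_-_WizardLM-2-7B-gguf",
    filename="WizardLM-2-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)

# WizardLM-2 uses the Vicuna-style prompt format (see the original card below).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What does Q4_K_M quantization mean? ASSISTANT:"
)
out = llm(prompt, max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```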
Original model description:
---
license: apache-2.0
---
<p style="font-size:20px;" align="center">
🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
<p align="center">
🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News 🔥🔥🔥 [2024/04/15]
We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
which offer improved performance on complex chat, multilingual, reasoning, and agent tasks.
The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
- WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance against leading proprietary models
and consistently outperforms all existing state-of-the-art open-source models.
- WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size.
- WizardLM-2 7B is the fastest and achieves performance comparable to existing open-source leaders that are 10x larger.
For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and our upcoming paper.
## Model Details
* **Model name**: WizardLM-2 7B
* **Developed by**: WizardLM@Microsoft AI
* **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Parameters**: 7B
* **Language(s)**: Multilingual
* **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
* **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
* **Paper**: WizardLM-2 (Upcoming)
* **License**: Apache2.0
## Model Capacities
**MT-Bench**
We also adopt the automatic, GPT-4-based MT-Bench evaluation framework proposed by lmsys to assess model performance.
WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models.
Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B scales.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
**Human Preferences Evaluation**
We carefully collected a complex and challenging set of real-world instructions covering the main categories of human requests, such as writing, coding, math, reasoning, agent use, and multilingual tasks.
We report the win:loss rate without ties:
- WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- WizardLM-2 7B is comparable to Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Method Overview
We built a **fully AI-powered synthetic training system** to train the WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Usage
❗<b>Note on system prompt usage:</b>
<b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be formatted as follows:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
USER: Who are you? ASSISTANT: I am WizardLM.</s>......
```
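As a small illustration (not from the original card), the multi-turn format above can be assembled with plain string formatting; the helper name and example turns below are made up:

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_wizardlm2_prompt(turns):
    """turns: list of (user_message, assistant_reply or None) pairs."""
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# The final turn leaves the assistant reply open for the model to complete.
print(build_wizardlm2_prompt([("Hi", "Hello."), ("Who are you?", None)]))
```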
<b> Inference WizardLM-2 Demo Script</b>
We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our github.
|
AleRothermel/mi-1.2-model | AleRothermel | 2024-05-20T03:24:20Z | 113 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T02:16:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: bert-base-cased
metrics:
- accuracy
model-index:
- name: mi-1.2-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi-1.2-model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7264
- Accuracy: 0.58
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.6501 | 0.04 | 10 | 1.6095 | 0.235 |
| 1.655 | 0.08 | 20 | 1.5876 | 0.23 |
| 1.6465 | 0.12 | 30 | 1.5874 | 0.305 |
| 1.6577 | 0.16 | 40 | 1.6006 | 0.2325 |
| 1.5666 | 0.2 | 50 | 1.5611 | 0.245 |
| 1.5667 | 0.24 | 60 | 1.4245 | 0.44 |
| 1.4837 | 0.28 | 70 | 1.2916 | 0.4175 |
| 1.2603 | 0.32 | 80 | 1.3869 | 0.3925 |
| 1.2865 | 0.36 | 90 | 1.4055 | 0.3475 |
| 1.4037 | 0.4 | 100 | 1.3934 | 0.32 |
| 1.3201 | 0.44 | 110 | 1.4511 | 0.4125 |
| 1.3977 | 0.48 | 120 | 1.2251 | 0.44 |
| 1.1444 | 0.52 | 130 | 1.1517 | 0.5175 |
| 1.1627 | 0.56 | 140 | 1.1211 | 0.5225 |
| 1.21 | 0.6 | 150 | 1.1336 | 0.53 |
| 1.2211 | 0.64 | 160 | 1.4186 | 0.4 |
| 1.2985 | 0.68 | 170 | 1.1251 | 0.4725 |
| 1.1856 | 0.72 | 180 | 1.1138 | 0.5075 |
| 1.1027 | 0.76 | 190 | 1.0810 | 0.5075 |
| 1.0998 | 0.8 | 200 | 1.1034 | 0.5225 |
| 1.2546 | 0.84 | 210 | 1.1205 | 0.4925 |
| 1.0265 | 0.88 | 220 | 1.1996 | 0.4925 |
| 1.0898 | 0.92 | 230 | 1.1002 | 0.515 |
| 1.19 | 0.96 | 240 | 1.0805 | 0.4925 |
| 1.1456 | 1.0 | 250 | 1.0509 | 0.525 |
| 0.9265 | 1.04 | 260 | 1.1092 | 0.51 |
| 0.8554 | 1.08 | 270 | 1.0098 | 0.5325 |
| 0.8695 | 1.12 | 280 | 1.0991 | 0.4975 |
| 0.8505 | 1.16 | 290 | 1.0827 | 0.5075 |
| 0.8892 | 1.2 | 300 | 1.1195 | 0.52 |
| 0.8982 | 1.24 | 310 | 1.0691 | 0.51 |
| 0.9301 | 1.28 | 320 | 1.0236 | 0.545 |
| 1.052 | 1.32 | 330 | 1.0296 | 0.535 |
| 0.8072 | 1.3600 | 340 | 1.0227 | 0.55 |
| 0.8822 | 1.4 | 350 | 1.0494 | 0.53 |
| 1.1561 | 1.44 | 360 | 1.2036 | 0.4925 |
| 0.9526 | 1.48 | 370 | 1.0443 | 0.56 |
| 0.9916 | 1.52 | 380 | 1.0378 | 0.555 |
| 1.0388 | 1.56 | 390 | 1.0920 | 0.5375 |
| 0.9326 | 1.6 | 400 | 1.0510 | 0.5375 |
| 0.8453 | 1.6400 | 410 | 1.1247 | 0.5025 |
| 1.03 | 1.6800 | 420 | 1.0281 | 0.565 |
| 0.971 | 1.72 | 430 | 1.0322 | 0.54 |
| 0.941 | 1.76 | 440 | 0.9858 | 0.565 |
| 0.8615 | 1.8 | 450 | 0.9793 | 0.555 |
| 0.8815 | 1.8400 | 460 | 0.9778 | 0.56 |
| 0.7658 | 1.88 | 470 | 0.9760 | 0.56 |
| 1.0073 | 1.92 | 480 | 1.0747 | 0.5175 |
| 0.8929 | 1.96 | 490 | 0.9910 | 0.565 |
| 0.9089 | 2.0 | 500 | 1.0512 | 0.535 |
| 0.5102 | 2.04 | 510 | 1.0545 | 0.555 |
| 0.6748 | 2.08 | 520 | 1.1621 | 0.5175 |
| 0.5222 | 2.12 | 530 | 1.1038 | 0.5575 |
| 0.7978 | 2.16 | 540 | 1.1728 | 0.53 |
| 0.6749 | 2.2 | 550 | 1.1029 | 0.5475 |
| 0.6621 | 2.24 | 560 | 1.0977 | 0.5425 |
| 0.6808 | 2.2800 | 570 | 1.1776 | 0.545 |
| 0.5728 | 2.32 | 580 | 1.1747 | 0.5325 |
| 0.75 | 2.36 | 590 | 1.1707 | 0.5275 |
| 0.6622 | 2.4 | 600 | 1.1082 | 0.555 |
| 0.6008 | 2.44 | 610 | 1.0922 | 0.57 |
| 0.6491 | 2.48 | 620 | 1.1375 | 0.545 |
| 0.5876 | 2.52 | 630 | 1.0614 | 0.5675 |
| 0.5326 | 2.56 | 640 | 1.0460 | 0.58 |
| 0.4901 | 2.6 | 650 | 1.0864 | 0.58 |
| 0.6151 | 2.64 | 660 | 1.1919 | 0.58 |
| 0.6478 | 2.68 | 670 | 1.1301 | 0.5575 |
| 0.4841 | 2.7200 | 680 | 1.1451 | 0.58 |
| 0.6365 | 2.76 | 690 | 1.0701 | 0.575 |
| 0.5284 | 2.8 | 700 | 1.1674 | 0.5325 |
| 0.6506 | 2.84 | 710 | 1.1016 | 0.55 |
| 0.6446 | 2.88 | 720 | 1.1340 | 0.57 |
| 0.5193 | 2.92 | 730 | 1.1692 | 0.525 |
| 0.6129 | 2.96 | 740 | 1.1717 | 0.5325 |
| 0.6013 | 3.0 | 750 | 1.1374 | 0.55 |
| 0.3392 | 3.04 | 760 | 1.2702 | 0.515 |
| 0.3188 | 3.08 | 770 | 1.2584 | 0.515 |
| 0.3272 | 3.12 | 780 | 1.3520 | 0.5225 |
| 0.341 | 3.16 | 790 | 1.2752 | 0.5575 |
| 0.3826 | 3.2 | 800 | 1.3126 | 0.55 |
| 0.3062 | 3.24 | 810 | 1.4909 | 0.52 |
| 0.2657 | 3.2800 | 820 | 1.3804 | 0.5575 |
| 0.4609 | 3.32 | 830 | 1.3712 | 0.5625 |
| 0.3388 | 3.36 | 840 | 1.4701 | 0.5275 |
| 0.3007 | 3.4 | 850 | 1.3373 | 0.57 |
| 0.2732 | 3.44 | 860 | 1.3699 | 0.575 |
| 0.4551 | 3.48 | 870 | 1.3874 | 0.555 |
| 0.3048 | 3.52 | 880 | 1.4913 | 0.5625 |
| 0.4104 | 3.56 | 890 | 1.4586 | 0.565 |
| 0.2633 | 3.6 | 900 | 1.4353 | 0.565 |
| 0.4435 | 3.64 | 910 | 1.5246 | 0.555 |
| 0.282 | 3.68 | 920 | 1.6866 | 0.5275 |
| 0.5918 | 3.7200 | 930 | 1.5193 | 0.5525 |
| 0.315 | 3.76 | 940 | 1.4276 | 0.565 |
| 0.1276 | 3.8 | 950 | 1.4411 | 0.5625 |
| 0.3389 | 3.84 | 960 | 1.5420 | 0.5625 |
| 0.3248 | 3.88 | 970 | 1.4492 | 0.575 |
| 0.3051 | 3.92 | 980 | 1.4321 | 0.5925 |
| 0.3363 | 3.96 | 990 | 1.4374 | 0.5825 |
| 0.4602 | 4.0 | 1000 | 1.4581 | 0.57 |
| 0.1582 | 4.04 | 1010 | 1.4434 | 0.5675 |
| 0.2344 | 4.08 | 1020 | 1.4551 | 0.5975 |
| 0.2646 | 4.12 | 1030 | 1.4999 | 0.59 |
| 0.1948 | 4.16 | 1040 | 1.5550 | 0.5625 |
| 0.3058 | 4.2 | 1050 | 1.5955 | 0.5775 |
| 0.1569 | 4.24 | 1060 | 1.5721 | 0.575 |
| 0.1777 | 4.28 | 1070 | 1.6241 | 0.56 |
| 0.1256 | 4.32 | 1080 | 1.5711 | 0.575 |
| 0.2467 | 4.36 | 1090 | 1.5735 | 0.59 |
| 0.1964 | 4.4 | 1100 | 1.5924 | 0.585 |
| 0.0578 | 4.44 | 1110 | 1.6353 | 0.585 |
| 0.1358 | 4.48 | 1120 | 1.6710 | 0.5775 |
| 0.174 | 4.52 | 1130 | 1.6733 | 0.5725 |
| 0.2022 | 4.5600 | 1140 | 1.6658 | 0.585 |
| 0.028 | 4.6 | 1150 | 1.6708 | 0.585 |
| 0.1222 | 4.64 | 1160 | 1.6989 | 0.5875 |
| 0.2295 | 4.68 | 1170 | 1.7131 | 0.5825 |
| 0.374 | 4.72 | 1180 | 1.7197 | 0.5725 |
| 0.1342 | 4.76 | 1190 | 1.7237 | 0.575 |
| 0.079 | 4.8 | 1200 | 1.7267 | 0.58 |
| 0.154 | 4.84 | 1210 | 1.7204 | 0.585 |
| 0.0403 | 4.88 | 1220 | 1.7183 | 0.58 |
| 0.1964 | 4.92 | 1230 | 1.7253 | 0.5775 |
| 0.1297 | 4.96 | 1240 | 1.7252 | 0.5775 |
| 0.0834 | 5.0 | 1250 | 1.7264 | 0.58 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Noursene/whisper-small-2000 | Noursene | 2024-05-20T03:18:45Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T02:40:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SiyuK/gpt2-reuters-tokenizer | SiyuK | 2024-05-20T03:17:28Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T03:17:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
theglassofwater/mistral_pretraining_2.2ksteps_16batch | theglassofwater | 2024-05-20T03:10:58Z | 188 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T03:10:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danhergir/platzi | danhergir | 2024-05-20T03:03:13Z | 194 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:AI-Lab-Makerere/beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-20T04:26:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- AI-Lab-Makerere/beans
metrics:
- accuracy
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: platzi
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- type: accuracy
value: 0.9924812030075187
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0317
- Accuracy: 0.9925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.136 | 3.85 | 500 | 0.0317 | 0.9925 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
quydau/dummy | quydau | 2024-05-20T03:03:10Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-20T02:30:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DucPhanBa/Vietnamese_Llama2 | DucPhanBa | 2024-05-20T03:01:21Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-05-20T02:57:47Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
MinhViet/bartpho-quesntion-without-content | MinhViet | 2024-05-20T02:56:16Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-18T09:24:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coolguyleo/harry-20 | coolguyleo | 2024-05-20T02:32:30Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/Llama2-7b-WhoIsHarryPotter",
"base_model:adapter:microsoft/Llama2-7b-WhoIsHarryPotter",
"license:other",
"region:us"
] | null | 2024-05-20T02:32:12Z | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Llama2-7b-WhoIsHarryPotter
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/Llama2-7b-WhoIsHarryPotter](https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
BillShih/Getac_F110 | BillShih | 2024-05-20T02:32:17Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T02:24:26Z | ---
license: apache-2.0
---
|
ukung/Nusantara-0.8b-Indo-Chat-GGUF | ukung | 2024-05-20T02:26:23Z | 7 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T02:12:22Z | ---
license: apache-2.0
---
|
animaRegem/gemma-2b-malayalam-model-vllm-4bit | animaRegem | 2024-05-20T02:24:53Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-07T18:25:16Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** animaRegem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
appvoid/palmer-002-32k | appvoid | 2024-05-20T02:24:07Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-20T03:28:35Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---

# palmer
### a better base model
This model is palmer-002-2401 scaled to a 32k context by merging and fine-tuning with TinyLlama-1.1B-32k-Instruct by Doctor-Shotgun.
### evaluation 🧪
Note that this is a zero-shot setting, as opposed to the Open LLM Leaderboard's few-shot evals.
```
model            | ARC-C  | OBQA   | HellaSwag | PIQA   | Winogrande | Average
tinyllama        | 0.3029 | 0.3600 | 0.5935    | 0.7329 | 0.5959     | 0.5170
palmer-002-2401  | 0.3294 | 0.3700 | 0.5950    | 0.7399 | 0.5896     | 0.5247
palmer-002-32k   | 0.3268 | 0.3780 | 0.5785    | 0.7492 | 0.6251     | 0.5315 (this)
babbage-002      | 0.3285 | 0.3620 | 0.6380    | 0.7606 | 0.6085     | 0.5395
```
This model's performance is close to OpenAI's babbage-002 while being able to use 2x the context size.
### prompt 📝
```
no prompt 🚀
```
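Since no prompt template is needed, here is a minimal usage sketch with 🤗 `transformers` (the input text and generation settings are illustrative):
```python
from transformers import pipeline

# palmer is a base model: plain text in, no chat template required.
generator = pipeline("text-generation", model="appvoid/palmer-002-32k")
print(generator("The three laws of motion are", max_new_tokens=64)[0]["generated_text"])
```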
<a href="https://ko-fi.com/appvoid" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 48px !important;width: 180px !important; filter: invert(70%);" ></a> |
animaRegem/gemma-2b-malayalam-model-vllm-16bit | animaRegem | 2024-05-20T02:23:15Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-07T18:22:07Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** animaRegem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Sorour/cls_sentiment_phi3_v1 | Sorour | 2024-05-20T02:22:20Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-20T01:45:59Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/Phi-3-mini-4k-instruct
datasets:
- generator
model-index:
- name: cls_sentiment_phi3_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cls_sentiment_phi3_v1
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9066 | 0.2083 | 50 | 0.9011 |
| 0.854 | 0.4167 | 100 | 0.8419 |
| 0.787 | 0.625 | 150 | 0.8062 |
| 0.7476 | 0.8333 | 200 | 0.7764 |
| 0.7141 | 1.0417 | 250 | 0.7636 |
| 0.6989 | 1.25 | 300 | 0.7528 |
| 0.6482 | 1.4583 | 350 | 0.7397 |
| 0.6537 | 1.6667 | 400 | 0.7207 |
| 0.6526 | 1.875 | 450 | 0.7122 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
imagepipeline/detailed_perfection | imagepipeline | 2024-05-20T02:21:26Z | 0 | 0 | null | [
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-20T02:21:23Z | ---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## detailed_perfection
<img src="https://via.placeholder.com/468x300?text=App+Screenshot+Here" alt="Generated on Image Pipeline" style="border-radius: 10px;">
**This lora model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - detailed perfection
[](https://imagepipeline.io/models/detailed_perfection?id=5dad2b0b-51db-4bfc-aaca-e431a5add399/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php` `javascript` `node` etc ? Checkout our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "sd1.5",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "5dad2b0b-51db-4bfc-aaca-e431a5add399",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
PQlet/textual-inversion-v2-ablation-vec3-img3 | PQlet | 2024-05-20T02:19:56Z | 10 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-20T02:19:54Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual Inversion training - PQlet/textual-inversion-v2-ablation-vec3-img3
The generated images are below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Emily666666/bert-base-cased-news-category-test | Emily666666 | 2024-05-20T02:19:15Z | 109 | 1 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-17T07:56:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tanganke/gpt2_mrpc | tanganke | 2024-05-20T02:15:46Z | 228 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"dataset:nyu-mll/glue",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T02:01:40Z | ---
datasets:
- nyu-mll/glue
metrics:
- accuracy
basemodel:
- openai-community/gpt2
--- |
tanganke/gpt2_qnli | tanganke | 2024-05-20T02:15:21Z | 211 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"dataset:nyu-mll/glue",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T02:01:54Z | ---
datasets:
- nyu-mll/glue
metrics:
- accuracy
basemodel:
- openai-community/gpt2
--- |
tanganke/gpt2_mnli | tanganke | 2024-05-20T02:14:46Z | 229 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"dataset:nyu-mll/glue",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T02:00:34Z | ---
datasets:
- nyu-mll/glue
metrics:
- accuracy
basemodel:
- openai-community/gpt2
--- |
tanganke/gpt2_cola | tanganke | 2024-05-20T02:11:53Z | 243 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"dataset:nyu-mll/glue",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T01:59:49Z | ---
datasets:
- nyu-mll/glue
metrics:
- accuracy
basemodel:
- openai-community/gpt2
---
# Model Card for Model ID
GPT2 model fine-tuned on cola from GLUE benchmark, using a learning rate of 5e-5 for 3 epochs.
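
A minimal usage sketch with 🤗 `transformers`; the label mapping is not documented in this card, so only the raw predicted class id is shown:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tanganke/gpt2_cola")
model = AutoModelForSequenceClassification.from_pretrained("tanganke/gpt2_cola")

inputs = tokenizer("The book was written by the author.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class id (CoLA acceptability judgment)
```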
|
jrc/phi3-mini-math | jrc | 2024-05-20T02:11:01Z | 10 | 1 | transformers | [
"transformers",
"phi3",
"text-generation",
"torchtune",
"minerva-math",
"conversational",
"custom_code",
"en",
"dataset:TIGER-Lab/MATH-plus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T23:57:38Z | ---
license: apache-2.0
datasets:
- TIGER-Lab/MATH-plus
language:
- en
tags:
- torchtune
- minerva-math
library_name: transformers
pipeline_tag: text-generation
---
# jrc/phi3-mini-math
<!-- Provide a quick summary of what the model is/does. -->
Math majors - who needs em? This model can answer any math questions you have.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jrc/phi3-mini-math", trust_remote_code=True)
```
## Training Details
Phi3 was trained using [torchtune](https://github.com/pytorch/torchtune) and the training script + config file are located in this repository.
```bash
tune run lora_finetune_distributed.py --config mini_lora.yaml
```
You can see a full Weights & Biases run [here](https://api.wandb.ai/links/jcummings/hkey76vj).
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was finetuned on the following datasets:
* [TIGER-Lab/MATH-plus](https://huggingface.co/datasets/TIGER-Lab/MATH-plus): An advanced math-specific dataset with 894k samples.
#### Hardware
* Machines: 4 x NVIDIA A100 GPUs
* Max VRAM used per GPU: 29 GB
* Real time: 10 hours
## Evaluation
The finetuned model is evaluated on [minerva-math](https://research.google/blog/minerva-solving-quantitative-reasoning-problems-with-language-models/) using [EleutherAI Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) through torchtune.
```bash
tune run eleuther_eval --config eleuther_evaluation \
checkpoint.checkpoint_dir=./lora-phi3-math \
tasks=["minerva_math"] \
batch_size=32
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|------------------------------------|-------|------|-----:|-----------|-----:|---|-----:|
|minerva_math |N/A |none | 4|exact_match|0.1670|± |0.0051|
| - minerva_math_algebra | 1|none | 4|exact_match|0.2502|± |0.0126|
| - minerva_math_counting_and_prob | 1|none | 4|exact_match|0.1329|± |0.0156|
| - minerva_math_geometry | 1|none | 4|exact_match|0.1232|± |0.0150|
| - minerva_math_intermediate_algebra| 1|none | 4|exact_match|0.0576|± |0.0078|
| - minerva_math_num_theory | 1|none | 4|exact_match|0.1148|± |0.0137|
| - minerva_math_prealgebra | 1|none | 4|exact_match|0.3077|± |0.0156|
| - minerva_math_precalc | 1|none | 4|exact_match|0.0623|± |0.0104|
This shows a large improvement over the base Phi3 Mini model.
## Model Card Contact
Drop me a line at @official_j3rck |
Mitsua/elan-mt-bt-ja-en | Mitsua | 2024-05-20T01:56:57Z | 619 | 6 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"ja",
"en",
"dataset:Mitsua/wikidata-parallel-descriptions-en-ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-05-20T01:56:12Z | ---
license: cc-by-sa-4.0
datasets:
- Mitsua/wikidata-parallel-descriptions-en-ja
language:
- ja
- en
metrics:
- bleu
- chrf
library_name: transformers
pipeline_tag: translation
---
# ElanMT
[**ElanMT-BT-ja-en**](https://huggingface.co/Mitsua/elan-mt-bt-ja-en) is a Japanese to English translation model developed by [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine.
- [**ElanMT-base-ja-en**](https://huggingface.co/Mitsua/elan-mt-base-ja-en) and [**ElanMT-base-en-ja**](https://huggingface.co/Mitsua/elan-mt-base-en-ja) are trained from scratch, exclusively on openly licensed corpora released under CC0, CC BY and CC BY-SA.
- This model is a fine-tuned checkpoint of **ElanMT-base-ja-en** and is trained exclusively on openly licensed data and Wikipedia back translated data using **ElanMT-base-en-ja**.
- Web crawled or other machine translated corpora are **not** used during the entire training procedure for the **ElanMT** models.
Despite the relatively low-resource training, thanks to back-translation and [a newly built CC0 corpus](https://huggingface.co/datasets/Mitsua/wikidata-parallel-descriptions-en-ja),
the model achieves performance comparable to currently available open translation models.
## Model Details
This is a translation model based on the [Marian MT](https://marian-nmt.github.io/) 6-layer encoder-decoder transformer architecture with a SentencePiece tokenizer.
- **Developed by**: [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine
- **Model type**: Translation
- **Source Language**: Japanese
- **Target Language**: English
- **License**: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
## Usage
1. Install the python packages
`pip install transformers accelerate sentencepiece`
* This model is verified on `transformers==4.40.2`
2. Run
```python
from transformers import pipeline
translator = pipeline('translation', model='Mitsua/elan-mt-bt-ja-en')
translator('こんにちは。私はAIです。')
```
3. For longer inputs with multiple sentences, using [pySBD](https://github.com/nipunsadvilkar/pySBD) is recommended.
`pip install transformers accelerate sentencepiece pysbd`
```python
import pysbd
seg = pysbd.Segmenter(language="ja", clean=False)
txt = 'こんにちは。私はAIです。お元気ですか?'
print(translator(seg.segment(txt)))
```
This idea is from [FuguMT](https://huggingface.co/staka/fugumt-ja-en) repo.
## Training Data
We referred heavily to the [FuguMT author's blog post](https://staka.jp/wordpress/?p=413) for dataset collection.
- [Mitsua/wikidata-parallel-descriptions-en-ja](https://huggingface.co/datasets/Mitsua/wikidata-parallel-descriptions-en-ja) (CC0 1.0)
- We newly built this 1.5M-line Wikidata parallel corpus to augment the training data. This greatly improved word-level vocabulary coverage.
- [The Kyoto Free Translation Task (KFTT)](https://www.phontron.com/kftt/) (CC BY-SA 3.0)
- Graham Neubig, "The Kyoto Free Translation Task," http://www.phontron.com/kftt, 2011.
- [Tatoeba](https://tatoeba.org/en/downloads) (CC BY 2.0 FR / CC0 1.0)
- https://tatoeba.org/
- [wikipedia-interlanguage-titles](https://github.com/bhaddow/wikipedia-interlanguage-titles) (The MIT License / CC BY-SA 4.0)
- We built parallel titles based on 2024-05-06 wikipedia dump.
- [WikiMatrix](https://github.com/facebookresearch/LASER/tree/main/tasks/WikiMatrix) (CC BY-SA 4.0)
- Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Francisco Guzmán, "WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia"
- [MDN Web Docs](https://github.com/mdn/translated-content) (The MIT / CC0 1.0 / CC BY-SA 2.5)
- https://github.com/mdn/translated-content
- [Wikimedia contenttranslation dump](https://dumps.wikimedia.org/other/contenttranslation/) (CC BY-SA 4.0)
- 2024-5-10 dump is used.
*Even if the dataset itself is CC-licensed, we did not use it if the corpus contained in the dataset is based on web crawling, is based on unauthorized use of copyrighted works, or is based on the machine translation output of other translation models.
## Training Procedure
We heavily referred "[Beating Edinburgh's WMT2017 system for en-de with Marian's Transformer model](https://github.com/marian-nmt/marian-examples/tree/master/wmt2017-transformer)"
for training process and hyperparameter tuning.
1. Trains a SentencePiece tokenizer with a 32k vocabulary on a 4M-line openly licensed corpus.
2. Trains an `en-ja` back-translation model on the 4M-line openly licensed corpus for 6 epochs. = **ElanMT-base-en-ja**
3. Trains a `ja-en` base translation model on the 4M-line openly licensed corpus for 6 epochs. = **ElanMT-base-ja-en**
4. Translates 20M lines of `en` Wikipedia to `ja` using the back-translation model.
5. Trains 4 `ja-en` models, each fine-tuned from the **ElanMT-base-ja-en** checkpoint, on 24M lines of training data augmented with the back-translated data for 6 epochs.
6. Merges the 4 trained models that produce the best validation score on the FLORES+ dev split.
7. Finetunes the merged model on a 1M-line high-quality corpus subset for 5 epochs.
## Evaluation
### Dataset
- [FLORES+](https://github.com/openlanguagedata/flores) (CC BY-SA 4.0) devtest split is used for evaluation.
- [NTREX](https://github.com/MicrosoftTranslator/NTREX) (CC BY-SA 4.0)
### Result
| **Model** | **Params** | **FLORES+ BLEU** | **FLORES+ chrf** | **NTREX BLEU** | **NTREX chrf** |
|:---|---:|---:|---:|---:|---:|
| [**ElanMT-BT**](https://huggingface.co/Mitsua/elan-mt-bt-ja-en) | 61M | 24.87 | 55.02 | 22.57 | 52.48|
| [**ElanMT-base**](https://huggingface.co/Mitsua/elan-mt-base-ja-en) | 61M | 21.61 | 52.53 | 18.43 | 49.09|
| [**ElanMT-tiny**](https://huggingface.co/Mitsua/elan-mt-tiny-ja-en) | 15M | 20.40 | 51.81 | 18.43 | 49.39|
| [staka/fugumt-ja-en](https://huggingface.co/staka/fugumt-ja-en) | 61M | 24.10 | 54.97 | 22.33 | 51.84|
| [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) | 610M | 23.88 | 53.98 | 22.59 | 51.57|
| [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) | 615M | 22.92 | 52.13 | 22.59 | 51.36|
| [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) | 3B | 28.13 | 56.86 | 27.65 | 55.60|
| [google/madlad400-3b-mt](https://huggingface.co/google/madlad400-3b-mt) | 3B | 26.95 | 56.62 | 26.11 | 54.61|
| [google/madlad400-7b-mt](https://huggingface.co/google/madlad400-7b-mt) | 7B | 28.84 | 57.46 | 28.19 | 55.85|
- *1 tested on `transformers==4.29.2` and `num_beams=4`
- *2 BLEU score is calculated by `sacreBLEU`
## Disclaimer
- The translated result may be very incorrect, harmful or biased. The model was developed to investigate achievable performance with only a relatively small, licensed corpus, and is not suitable for use cases requiring high translation accuracy. Under Section 5 of the CC BY-SA 4.0 License, ELAN MITSUA Project / Abstract Engine is not responsible for any direct or indirect loss caused by the use of the model.
- 免責事項:翻訳結果は不正確で、有害であったりバイアスがかかっている可能性があります。本モデルは比較的小規模でライセンスされたコーパスのみで達成可能な性能を調査するために開発されたモデルであり、翻訳の正確性が必要なユースケースでの使用には適していません。絵藍ミツアプロジェクト及び株式会社アブストラクトエンジンはCC BY-SA 4.0ライセンス第5条に基づき、本モデルの使用によって生じた直接的または間接的な損失に対して、一切の責任を負いません。 |
animaRegem/gemma-2b-malayalam-model-adaptors | animaRegem | 2024-05-20T01:54:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-07T18:14:06Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** animaRegem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
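Since this repository appears to contain LoRA adapters rather than a merged checkpoint, a loading sketch along the following lines may work (hypothetical usage; the adapter layout, base-model compatibility, and generation settings are assumptions):
```python
# Hypothetical sketch: load the 4-bit base model (requires bitsandbytes) and attach the adapters with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gemma-2b-bnb-4bit"                          # base model named in this card
adapter_id = "animaRegem/gemma-2b-malayalam-model-adaptors"    # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Write a short sentence in Malayalam:", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```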
|
Mitsua/elan-mt-bt-en-ja | Mitsua | 2024-05-20T01:53:38Z | 626 | 8 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"ja",
"en",
"dataset:Mitsua/wikidata-parallel-descriptions-en-ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2024-05-20T01:51:18Z | ---
license: cc-by-sa-4.0
datasets:
- Mitsua/wikidata-parallel-descriptions-en-ja
language:
- ja
- en
metrics:
- bleu
- chrf
library_name: transformers
pipeline_tag: translation
---
# ElanMT
[**ElanMT-BT-en-ja**](https://huggingface.co/Mitsua/elan-mt-bt-en-ja) is a English to Japanese translation model developed by [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine.
- [**ElanMT-base-en-ja**](https://huggingface.co/Mitsua/elan-mt-base-en-ja) and [**ElanMT-base-ja-en**](https://huggingface.co/Mitsua/elan-mt-base-ja-en) are trained from scratch, exclusively on openly licensed corpora released under CC0, CC BY and CC BY-SA.
- This model is a fine-tuned checkpoint of **ElanMT-base-en-ja** and is trained exclusively on openly licensed data and Wikipedia back translated data using **ElanMT-base-ja-en**.
- Web crawled or other machine translated corpora are **not** used during the entire training procedure for the **ElanMT** models.
Despite the relatively low-resource training, thanks to back-translation and [a newly built CC0 corpus](https://huggingface.co/datasets/Mitsua/wikidata-parallel-descriptions-en-ja),
the model achieves performance comparable to currently available open translation models.
## Model Details
This is a translation model based on the [Marian MT](https://marian-nmt.github.io/) 6-layer encoder-decoder transformer architecture with a SentencePiece tokenizer.
- **Developed by**: [ELAN MITSUA Project](https://elanmitsua.com/en/) / Abstract Engine
- **Model type**: Translation
- **Source Language**: English
- **Target Language**: Japanese
- **License**: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
## Usage
1. Install the python packages
`pip install transformers accelerate sentencepiece`
* This model is verified on `transformers==4.40.2`
2. Run
```python
from transformers import pipeline
translator = pipeline('translation', model='Mitsua/elan-mt-bt-en-ja')
translator('Hello. I am an AI.')
```
3. For longer inputs with multiple sentences, using [pySBD](https://github.com/nipunsadvilkar/pySBD) is recommended.
`pip install transformers accelerate sentencepiece pysbd`
```python
import pysbd
seg_en = pysbd.Segmenter(language="en", clean=False)
txt = 'Hello. I am an AI. How are you doing?'
print(translator(seg_en.segment(txt)))
```
This idea is from [FuguMT](https://huggingface.co/staka/fugumt-en-ja) repo.
## Training Data
We referred heavily to the [FuguMT author's blog post](https://staka.jp/wordpress/?p=413) for dataset collection.
- [Mitsua/wikidata-parallel-descriptions-en-ja](https://huggingface.co/datasets/Mitsua/wikidata-parallel-descriptions-en-ja) (CC0 1.0)
- We newly built this 1.5M-line Wikidata parallel corpus to augment the training data. This greatly improved word-level vocabulary coverage.
- [The Kyoto Free Translation Task (KFTT)](https://www.phontron.com/kftt/) (CC BY-SA 3.0)
- Graham Neubig, "The Kyoto Free Translation Task," http://www.phontron.com/kftt, 2011.
- [Tatoeba](https://tatoeba.org/en/downloads) (CC BY 2.0 FR / CC0 1.0)
- https://tatoeba.org/
- [wikipedia-interlanguage-titles](https://github.com/bhaddow/wikipedia-interlanguage-titles) (The MIT License / CC BY-SA 4.0)
- We built parallel titles based on 2024-05-06 wikipedia dump.
- [WikiMatrix](https://github.com/facebookresearch/LASER/tree/main/tasks/WikiMatrix) (CC BY-SA 4.0)
- Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Francisco Guzmán, "WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia"
- [MDN Web Docs](https://github.com/mdn/translated-content) (The MIT / CC0 1.0 / CC BY-SA 2.5)
- https://github.com/mdn/translated-content
- [Wikimedia contenttranslation dump](https://dumps.wikimedia.org/other/contenttranslation/) (CC BY-SA 4.0)
- 2024-5-10 dump is used.
*Even if the dataset itself is CC-licensed, we did not use it if the corpus contained in the dataset is based on web crawling, is based on unauthorized use of copyrighted works, or is based on the machine translation output of other translation models.
## Training Procedure
We heavily referred "[Beating Edinburgh's WMT2017 system for en-de with Marian's Transformer model](https://github.com/marian-nmt/marian-examples/tree/master/wmt2017-transformer)"
for training process and hyperparameter tuning.
1. Trains a SentencePiece tokenizer with a 32k vocabulary on a 4M-line openly licensed corpus.
2. Trains a `ja-en` back-translation model on the 4M-line openly licensed corpus for 6 epochs. = **ElanMT-base-ja-en**
3. Trains an `en-ja` base translation model on the 4M-line openly licensed corpus for 6 epochs. = **ElanMT-base-en-ja**
4. Translates 20M lines of `ja` Wikipedia to `en` using the back-translation model.
5. Trains 4 `en-ja` models, each fine-tuned from the **ElanMT-base-en-ja** checkpoint, on 24M lines of training data augmented with the back-translated data for 6 epochs.
6. Merges the 4 trained models that produce the best validation score on the FLORES+ dev split.
7. Finetunes the merged model on a 1M-line high-quality corpus subset for 5 epochs.
## Evaluation
### Dataset
- [FLORES+](https://github.com/openlanguagedata/flores) (CC BY-SA 4.0) devtest split is used for evaluation.
- [NTREX](https://github.com/MicrosoftTranslator/NTREX) (CC BY-SA 4.0)
### Result
| **Model** | **Params** | **FLORES+ BLEU** | **FLORES+ chrf** | **NTREX BLEU** | **NTREX chrf** |
|:---|---:|---:|---:|---:|---:|
| [**ElanMT-BT**](https://huggingface.co/Mitsua/elan-mt-bt-en-ja) | 61M | 29.96 | **38.43** | **25.63** | **35.41**|
| [**ElanMT-base**](https://huggingface.co/Mitsua/elan-mt-base-en-ja) **w/o back-translation** | 61M | 26.55 | 35.28 | 23.04 | 32.94|
| [**ElanMT-tiny**](https://huggingface.co/Mitsua/elan-mt-tiny-en-ja) | 15M | 25.93 | 34.69 | 22.78 | 33.00|
| [staka/fugumt-en-ja](https://huggingface.co/staka/fugumt-en-ja) (*1) | 61M | **30.89** | 38.38 | 24.74 | 34.23|
| [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) | 610M | 26.31 | 34.37 | 23.35 | 32.66|
| [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) | 615M | 17.09 | 27.32 | 14.92 | 26.26|
| [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) | 3B | 20.04 | 30.33 | 17.07 | 28.46|
| [google/madlad400-3b-mt](https://huggingface.co/google/madlad400-3b-mt) | 3B | 24.62 | 33.89 | 23.64 | 33.48|
| [google/madlad400-7b-mt](https://huggingface.co/google/madlad400-7b-mt) | 7B | 25.57 | 34.59 | 24.60 | 34.43|
- *1 tested on `transformers==4.29.2` and `num_beams=4`
- *2 BLEU score is calculated by `sacreBLEU` with `tokenize=ja-mecab`
## Disclaimer
- The translated result may be very incorrect, harmful or biased. The model was developed to investigate achievable performance with only a relatively small, licensed corpus, and is not suitable for use cases requiring high translation accuracy. Under Section 5 of the CC BY-SA 4.0 License, ELAN MITSUA Project / Abstract Engine is not responsible for any direct or indirect loss caused by the use of the model.
- 免責事項:翻訳結果は不正確で、有害であったりバイアスがかかっている可能性があります。本モデルは比較的小規模でライセンスされたコーパスのみで達成可能な性能を調査するために開発されたモデルであり、翻訳の正確性が必要なユースケースでの使用には適していません。絵藍ミツアプロジェクト及び株式会社アブストラクトエンジンはCC BY-SA 4.0ライセンス第5条に基づき、本モデルの使用によって生じた直接的または間接的な損失に対して、一切の責任を負いません。 |
ruidanwang/minima | ruidanwang | 2024-05-20T01:50:46Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T01:50:46Z | ---
license: apache-2.0
---
|
RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf | RichardErkhov | 2024-05-20T01:49:02Z | 10 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T00:03:39Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Synatra-RP-Orca-2-7b-v0.1 - GGUF
- Model creator: https://huggingface.co/maywell/
- Original model: https://huggingface.co/maywell/Synatra-RP-Orca-2-7b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Synatra-RP-Orca-2-7b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q2_K.gguf) | Q2_K | 2.36GB |
| [Synatra-RP-Orca-2-7b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [Synatra-RP-Orca-2-7b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [Synatra-RP-Orca-2-7b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q3_K.gguf) | Q3_K | 3.07GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [Synatra-RP-Orca-2-7b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_0.gguf) | Q4_0 | 3.56GB |
| [Synatra-RP-Orca-2-7b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_K.gguf) | Q4_K | 3.8GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_1.gguf) | Q4_1 | 3.95GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_0.gguf) | Q5_0 | 4.33GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_K.gguf) | Q5_K | 4.45GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_1.gguf) | Q5_1 | 4.72GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q6_K.gguf) | Q6_K | 5.15GB |
| [Synatra-RP-Orca-2-7b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: apache-2.0
---
# **Synatra-RP-Orca-2-7b-v0.1🐧**
## Support Me
Synatra is a personal project and is being developed with one person's resources. If you like the model, how about a little research funding?
[<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell)
Wanna be a sponsor? (Please) Contact me on Telegram **AlzarTakkarsen**
# **Model Details**
**Base Model**
microsoft/Orca-2-7b
**Model Description**
It's a test RP SFT model, fine-tuned from microsoft/Orca-2-7b.
**Trained On**
A100 80GB * 1
**Instruction format**
Alpaca (works better), ChatML
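For reference, a standard Alpaca-style prompt can be assembled as in the sketch below (the exact wording this checkpoint expects is an assumption; ChatML formatting is also reported to work):
```python
# Rough Alpaca-style prompt layout (assumed); adjust as needed or use ChatML instead.
instruction = "Introduce yourself in Korean."
prompt = (
    "### Instruction:\n"
    f"{instruction}\n\n"
    "### Response:\n"
)
print(prompt)
```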
|
vr4sigma/gptchatbot | vr4sigma | 2024-05-20T01:43:13Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"en",
"dataset:HuggingFaceFW/fineweb",
"dataset:PleIAs/YouTube-Commons",
"arxiv:1910.09700",
"license:wtfpl",
"region:us"
] | null | 2024-05-20T01:39:27Z | ---
license: wtfpl
datasets:
- HuggingFaceFW/fineweb
- PleIAs/YouTube-Commons
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ValterVar1/saiga-7b-lora-ner | ValterVar1 | 2024-05-20T01:37:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T01:08:01Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF | mradermacher | 2024-05-20T01:37:40Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"orpo",
"en",
"base_model:baconnier/Gaston_Yi-1.5-9B-Chat",
"base_model:quantized:baconnier/Gaston_Yi-1.5-9B-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T00:21:27Z | ---
base_model: baconnier/Gaston_Yi-1.5-9B-Chat
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- orpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/baconnier/Gaston_Yi-1.5-9B-Chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
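As a minimal local-inference sketch (not part of the original card), one of the GGUF files can be loaded with `llama-cpp-python`; the file name and context size below are assumptions:
```python
# Minimal sketch using llama-cpp-python; assumes the Q4_K_M file has already been downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="Gaston_Yi-1.5-9B-Chat.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```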
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gaston_Yi-1.5-9B-Chat-GGUF/resolve/main/Gaston_Yi-1.5-9B-Chat.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
PQlet/textual-inversion-v2-ablation-vec5-img9 | PQlet | 2024-05-20T01:31:44Z | 17 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-20T01:31:42Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual Inversion training - PQlet/textual-inversion-v2-ablation-vec5-img9
The generated images are below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
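In place of the TODO above, a plausible sketch (not verified against this checkpoint) is to load the base Stable Diffusion pipeline and attach the learned embedding; the placeholder token is an assumption that depends on how training was configured:
```python
# Hypothetical usage sketch: the embedding file and placeholder token for this run are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("PQlet/textual-inversion-v2-ablation-vec5-img9")

# Replace "<concept>" with the placeholder token actually used during training.
image = pipe("A photo of <concept> on a beach").images[0]
image.save("sample.png")
```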
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
JosineyJr/generate-conventional-commit-messages | JosineyJr | 2024-05-20T01:31:35Z | 0 | 0 | unsloth | [
"unsloth",
"safetensors",
"code",
"text2text-generation",
"en",
"base_model:meta-llama/Meta-Llama-Guard-2-8B",
"base_model:finetune:meta-llama/Meta-Llama-Guard-2-8B",
"license:apache-2.0",
"region:us"
] | text2text-generation | 2024-05-20T01:15:15Z | ---
license: apache-2.0
language:
- en
library_name: unsloth
tags:
- code
pipeline_tag: text2text-generation
base_model: meta-llama/Meta-Llama-Guard-2-8B
---
# About the project
CommitWizard is a project that uses pre-trained language models to help automate the generation of commit messages based on code changes. It employs 4-bit quantization to optimize memory usage while maintaining model efficiency and accuracy. |
TinyPixel/openelm-ct | TinyPixel | 2024-05-20T01:29:49Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"openelm",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-20T01:29:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tsavage68/MedQA_L3_250steps_1e6rate_01beat_CSFTDPO | tsavage68 | 2024-05-20T01:23:51Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T01:19:35Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_250steps_1e6rate_01beat_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_250steps_1e6rate_01beat_CSFTDPO
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4710
- Rewards/chosen: -0.7540
- Rewards/rejected: -1.6509
- Rewards/accuracies: 0.7758
- Rewards/margins: 0.8969
- Logps/rejected: -37.8254
- Logps/chosen: -25.7624
- Logits/rejected: -1.1604
- Logits/chosen: -1.1585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 250
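For illustration only (not from the original card), the hyperparameters above map roughly onto a TRL `DPOConfig` like the sketch below; exact argument names depend on the TRL version, and `beta=0.1` is inferred from the model name:
```python
# Rough, version-dependent sketch of the listed hyperparameters as a TRL DPOConfig.
from trl import DPOConfig

config = DPOConfig(
    output_dir="MedQA_L3_250steps_1e6rate_01beat_CSFTDPO",
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=250,
    seed=42,
    beta=0.1,  # assumed from the "01beat" tag in the model name
)
```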
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.695 | 0.0489 | 50 | 0.6713 | 0.0342 | -0.0142 | 0.6615 | 0.0484 | -21.4583 | -17.8807 | -0.9400 | -0.9395 |
| 0.6187 | 0.0977 | 100 | 0.5915 | -0.1174 | -0.4200 | 0.7121 | 0.3027 | -25.5168 | -19.3963 | -1.0412 | -1.0403 |
| 0.559 | 0.1466 | 150 | 0.5116 | -0.4993 | -1.1517 | 0.7429 | 0.6524 | -32.8335 | -23.2153 | -1.1115 | -1.1101 |
| 0.4654 | 0.1954 | 200 | 0.4732 | -0.7696 | -1.6630 | 0.7780 | 0.8934 | -37.9465 | -25.9187 | -1.1618 | -1.1598 |
| 0.4766 | 0.2443 | 250 | 0.4710 | -0.7540 | -1.6509 | 0.7758 | 0.8969 | -37.8254 | -25.7624 | -1.1604 | -1.1585 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
DownwardSpiral33/gpt2-imdb-pos-roberta128_1_0-2024.05.19.23.06 | DownwardSpiral33 | 2024-05-20T01:22:09Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T01:21:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hiba2/results_t5_wiki | hiba2 | 2024-05-20T01:21:52Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar",
"base_model:finetune:ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-20T01:21:20Z | ---
license: apache-2.0
base_model: ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: results_t5_wiki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_t5_wiki
This model is a fine-tuned version of [ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar](https://huggingface.co/ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Rouge1: 0.1188
- Rouge2: 0.0194
- Rougel: 0.1188
- Rougelsum: 0.1186
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.8768 | 0.2143 | 500 | 0.0228 | 0.1148 | 0.0128 | 0.1148 | 0.1147 | 19.0 |
| 0.0437 | 0.4286 | 1000 | 0.0111 | 0.1164 | 0.0154 | 0.1168 | 0.1165 | 19.0 |
| 0.0436 | 0.6429 | 1500 | 0.0060 | 0.1168 | 0.0163 | 0.1171 | 0.1169 | 19.0 |
| 0.0212 | 0.8573 | 2000 | 0.0052 | 0.117 | 0.0165 | 0.1173 | 0.117 | 19.0 |
| 0.0161 | 1.0716 | 2500 | 0.0018 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.011 | 1.2859 | 3000 | 0.0018 | 0.1188 | 0.0193 | 0.1188 | 0.1186 | 19.0 |
| 0.0094 | 1.5002 | 3500 | 0.0014 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0107 | 1.7145 | 4000 | 0.0007 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0069 | 1.9288 | 4500 | 0.0006 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.007 | 2.1432 | 5000 | 0.0006 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0064 | 2.3575 | 5500 | 0.0006 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0062 | 2.5718 | 6000 | 0.0015 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0042 | 2.7861 | 6500 | 0.0005 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0043 | 3.0004 | 7000 | 0.0004 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0042 | 3.2147 | 7500 | 0.0012 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0047 | 3.4291 | 8000 | 0.0010 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0043 | 3.6434 | 8500 | 0.0008 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0024 | 3.8577 | 9000 | 0.0003 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0026 | 4.0720 | 9500 | 0.0005 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0029 | 4.2863 | 10000 | 0.0003 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0045 | 4.5006 | 10500 | 0.0006 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0024 | 4.7150 | 11000 | 0.0001 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0018 | 4.9293 | 11500 | 0.0002 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.002 | 5.1436 | 12000 | 0.0002 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0022 | 5.3579 | 12500 | 0.0001 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0017 | 5.5722 | 13000 | 0.0003 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0014 | 5.7865 | 13500 | 0.0005 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0055 | 6.0009 | 14000 | 0.0012 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 16.3147 |
| 0.0127 | 6.2152 | 14500 | 0.0002 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
| 0.0012 | 6.4295 | 15000 | 0.0002 | 0.1188 | 0.0194 | 0.1188 | 0.1186 | 19.0 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jikaixuan/zephyr-ds | jikaixuan | 2024-05-20T01:21:41Z | 196 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-10T03:49:51Z | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- generated_from_trainer
model-index:
- name: zephyr-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-ds
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3439
- Rewards/chosen: -1.1633
- Rewards/rejected: -3.5290
- Rewards/accuracies: 0.7420
- Rewards/margins: 2.3657
- Logps/rejected: -294.5901
- Logps/chosen: -295.8908
- Logits/rejected: -2.7390
- Logits/chosen: -2.7421
- Use Label: 9180.7998
- Pred Label: 6851.2002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Use Label | Pred Label |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:---------:|:----------:|
| 0.333 | 1.0 | 955 | 0.3439 | -1.1633 | -3.5290 | 0.7420 | 2.3657 | -294.5901 | -295.8908 | -2.7390 | -2.7421 | 8950.7998 | 6581.2002 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf | RichardErkhov | 2024-05-20T01:17:30Z | 12 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T22:42:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deacon-13b - GGUF
- Model creator: https://huggingface.co/KnutJaegersberg/
- Original model: https://huggingface.co/KnutJaegersberg/deacon-13b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deacon-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q2_K.gguf) | Q2_K | 4.52GB |
| [deacon-13b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [deacon-13b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [deacon-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [deacon-13b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [deacon-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q3_K.gguf) | Q3_K | 5.9GB |
| [deacon-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [deacon-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [deacon-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [deacon-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q4_0.gguf) | Q4_0 | 6.86GB |
| [deacon-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [deacon-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [deacon-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q4_K.gguf) | Q4_K | 7.33GB |
| [deacon-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [deacon-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q4_1.gguf) | Q4_1 | 7.61GB |
| [deacon-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q5_0.gguf) | Q5_0 | 8.36GB |
| [deacon-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [deacon-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q5_K.gguf) | Q5_K | 8.6GB |
| [deacon-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [deacon-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q5_1.gguf) | Q5_1 | 9.1GB |
| [deacon-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q6_K.gguf) | Q6_K | 9.95GB |
| [deacon-13b.Q8_0.gguf](https://huggingface.co/RichardErkhov/KnutJaegersberg_-_deacon-13b-gguf/blob/main/deacon-13b.Q8_0.gguf) | Q8_0 | 12.88GB |
Original model description:
---
license: cc-by-nc-4.0
datasets:
- KnutJaegersberg/facehugger
---

This model was fine-tuned on AI-filtered subsets of the GPT-4-based subset of the Dolphin dataset and of EvolInstruct V2.
It has not been explicitly aligned to positive, negative or bureaucratically prescribed value systems.
It might kill us all! Time to shit your pants, regulators. I literally put black goo on Dolphin-7B sperm, which then fertilized Evolved Instructions...
What's different is evil... ;)
I intend to train 3 sizes.
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
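As a rough, illustrative sketch (not part of the original card), one of the GGUF files listed above can be loaded with llama-cpp-python and prompted with this template; the chosen quant file, context size, and stop string are assumptions for demonstration only.

```python
# Hypothetical usage sketch for the GGUF quants listed above (not from the original card).
# Assumes llama-cpp-python is installed and deacon-13b.Q4_K_M.gguf was downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="deacon-13b.Q4_K_M.gguf", n_ctx=4096)

# Build the prompt in the format shown in the example above.
prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to complete the task "
    "as faithfully as you can. While performing the task think step-by-step and justify your steps.\n"
    "### Instruction:\n"
    "How do you fine tune a large language model?\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```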
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__deacon-13b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.78 |
| ARC (25-shot) | 57.85 |
| HellaSwag (10-shot) | 82.63 |
| MMLU (5-shot) | 55.25 |
| TruthfulQA (0-shot) | 39.33 |
| Winogrande (5-shot) | 76.32 |
| GSM8K (5-shot) | 10.39 |
| DROP (3-shot) | 5.67 |
|
crisistransformers/CT-M3-Complete | crisistransformers | 2024-05-20T01:17:20Z | 366 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2403.16614",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-10T04:05:32Z | # CrisisTransformers
CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details.
CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder.
## Uses
CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling.
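A minimal, hedged sketch of the workflow described above — loading this checkpoint as a RoBERTa-style masked language model before fine-tuning it for a downstream task; the example sentence and pipeline call are illustrative assumptions, not part of the original card.

```python
# Hedged example: fill-mask inference with a CrisisTransformers pre-trained checkpoint.
# The input sentence is illustrative only.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "crisistransformers/CT-M3-Complete"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill(f"Please follow the official {tokenizer.mask_token} guidance during the outbreak."))
```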
## Models and naming conventions
*CT-M1* models were trained from scratch for up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa weights and *CT-M3* models with pre-trained BERTweet weights; both were trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder.
| pre-trained model | source |
|--|--|
|CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)|
|CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)|
|CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)|
|CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)|
|CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)|
|CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)|
|CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)|
|CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)|
| sentence encoder | source |
|--|--|
|CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)|
|CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)|
|CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)|
Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese.
## Citation
If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper:
```
@article{lamsal2023crisistransformers,
title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
journal={Knowledge-Based Systems},
pages={111916},
year={2024},
publisher={Elsevier}
}
```
If you use the multi-lingual sentence encoders, please cite the following paper:
```
@article{lamsal2024semantically,
title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
year={2024},
eprint={2403.16614},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
crisistransformers/CT-M3-BestLoss | crisistransformers | 2024-05-20T01:17:14Z | 162 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2403.16614",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-21T07:27:53Z | # CrisisTransformers
CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details.
CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder.
## Uses
CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling.
## Models and naming conventions
*CT-M1* models were trained from scratch for up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa weights and *CT-M3* models with pre-trained BERTweet weights; both were trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder.
| pre-trained model | source |
|--|--|
|CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)|
|CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)|
|CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)|
|CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)|
|CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)|
|CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)|
|CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)|
|CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)|
| sentence encoder | source |
|--|--|
|CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)|
|CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)|
|CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)|
Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese.
## Citation
If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper:
```
@article{lamsal2023crisistransformers,
title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
journal={Knowledge-Based Systems},
pages={111916},
year={2024},
publisher={Elsevier}
}
```
If you use the multi-lingual sentence encoders, please cite the following paper:
```
@article{lamsal2024semantically,
title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
year={2024},
eprint={2403.16614},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
crisistransformers/CT-M3-OneLook | crisistransformers | 2024-05-20T01:17:07Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2403.16614",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-10T03:42:07Z | # CrisisTransformers
CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details.
CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder.
## Uses
CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling.
## Models and naming conventions
*CT-M1* models were trained from scratch for up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa weights and *CT-M3* models with pre-trained BERTweet weights; both were trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder.
| pre-trained model | source |
|--|--|
|CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)|
|CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)|
|CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)|
|CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)|
|CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)|
|CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)|
|CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)|
|CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)|
| sentence encoder | source |
|--|--|
|CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)|
|CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)|
|CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)|
Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese.
## Citation
If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper:
```
@article{lamsal2023crisistransformers,
title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
journal={Knowledge-Based Systems},
pages={111916},
year={2024},
publisher={Elsevier}
}
```
If you use the multi-lingual sentence encoders, please cite the following paper:
```
@article{lamsal2024semantically,
title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
year={2024},
eprint={2403.16614},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
crisistransformers/CT-M2-Complete | crisistransformers | 2024-05-20T01:17:01Z | 243 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2403.16614",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-21T06:55:04Z | # CrisisTransformers
CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details.
CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder.
## Uses
CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling.
## Models and naming conventions
*CT-M1* models were trained from scratch for up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa weights and *CT-M3* models with pre-trained BERTweet weights; both were trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder.
| pre-trained model | source |
|--|--|
|CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)|
|CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)|
|CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)|
|CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)|
|CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)|
|CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)|
|CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)|
|CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)|
| sentence encoder | source |
|--|--|
|CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)|
|CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)|
|CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)|
Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese.
## Citation
If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper:
```
@article{lamsal2023crisistransformers,
title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
journal={Knowledge-Based Systems},
pages={111916},
year={2024},
publisher={Elsevier}
}
```
If you use the multi-lingual sentence encoders, please cite the following paper:
```
@article{lamsal2024semantically,
title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
year={2024},
eprint={2403.16614},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
crisistransformers/CT-M2-OneLook | crisistransformers | 2024-05-20T01:16:48Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2403.16614",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-10T03:28:54Z | # CrisisTransformers
CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details.
CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder.
## Uses
CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling.
## Models and naming conventions
*CT-M1* models were trained from scratch for up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa weights and *CT-M3* models with pre-trained BERTweet weights; both were trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder.
| pre-trained model | source |
|--|--|
|CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)|
|CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)|
|CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)|
|CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)|
|CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)|
|CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)|
|CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)|
|CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)|
| sentence encoder | source |
|--|--|
|CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)|
|CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)|
|CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)|
Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese.
## Citation
If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper:
```
@article{lamsal2023crisistransformers,
title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
journal={Knowledge-Based Systems},
pages={111916},
year={2024},
publisher={Elsevier}
}
```
If you use the multi-lingual sentence encoders, please cite the following paper:
```
@article{lamsal2024semantically,
title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
year={2024},
eprint={2403.16614},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
crisistransformers/CT-mBERT-SE | crisistransformers | 2024-05-20T01:15:53Z | 1,306 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2403.16614",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-13T23:07:56Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# CrisisTransformers
CrisisTransformers is a family of pre-trained language models and sentence encoders introduced in the papers "[CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts](https://www.sciencedirect.com/science/article/pii/S0950705124005501)" and "[Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts](https://arxiv.org/abs/2403.16614)". The models were trained based on the RoBERTa pre-training procedure on a massive corpus of over 15 billion word tokens sourced from tweets associated with 30+ crisis events such as disease outbreaks, natural disasters, conflicts, etc. Please refer to the [associated paper](https://www.sciencedirect.com/science/article/pii/S0950705124005501) for more details.
CrisisTransformers were evaluated on 18 public crisis-specific datasets against strong baselines. Our pre-trained models outperform the baselines across all 18 datasets in classification tasks, and our best-performing sentence-encoder (mono-lingual) outperforms the state-of-the-art by more than 17\% in sentence encoding tasks. The multi-lingual sentence encoders (support 50+ languages; see [associated paper](https://arxiv.org/abs/2403.16614)) are designed to approximate the embedding space of the best-performing mono-lingual sentence encoder.
## Uses
CrisisTransformers has 8 pre-trained models, 1 mono-lingual and 2 multi-lingual sentence encoders. The pre-trained models should be finetuned for downstream tasks just like [BERT](https://huggingface.co/bert-base-cased) and [RoBERTa](https://huggingface.co/roberta-base). The sentence encoders can be used out-of-the-box just like [Sentence-Transformers](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) for sentence encoding to facilitate tasks such as semantic search, clustering, topic modelling.
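A minimal, hedged sketch (not part of the original card) of out-of-the-box sentence encoding with this multi-lingual encoder; the example texts are illustrative.

```python
# Hedged example: encoding crisis-related texts in two languages and comparing them.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("crisistransformers/CT-mBERT-SE")
texts = [
    "Flood waters are rising near the river bank.",       # English
    "Se esperan inundaciones cerca del río esta noche.",  # Spanish
]
embeddings = encoder.encode(texts, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity between the two sentences
```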
## Models and naming conventions
*CT-M1* models were trained from scratch for up to 40 epochs, while *CT-M2* models were initialized with pre-trained RoBERTa weights and *CT-M3* models with pre-trained BERTweet weights; both were trained for up to 20 epochs. *OneLook* represents the checkpoint after 1 epoch, *BestLoss* represents the checkpoint with the lowest loss during training, and *Complete* represents the checkpoint after completing all epochs. *SE* represents sentence encoder.
| pre-trained model | source |
|--|--|
|CT-M1-BestLoss|[crisistransformers/CT-M1-BestLoss](https://huggingface.co/crisistransformers/CT-M1-BestLoss)|
|CT-M1-Complete|[crisistransformers/CT-M1-Complete](https://huggingface.co/crisistransformers/CT-M1-Complete)|
|CT-M2-OneLook|[crisistransformers/CT-M2-OneLook](https://huggingface.co/crisistransformers/CT-M2-OneLook)|
|CT-M2-BestLoss|[crisistransformers/CT-M2-BestLoss](https://huggingface.co/crisistransformers/CT-M2-BestLoss)|
|CT-M2-Complete|[crisistransformers/CT-M2-Complete](https://huggingface.co/crisistransformers/CT-M2-Complete)|
|CT-M3-OneLook|[crisistransformers/CT-M3-OneLook](https://huggingface.co/crisistransformers/CT-M3-OneLook)|
|CT-M3-BestLoss|[crisistransformers/CT-M3-BestLoss](https://huggingface.co/crisistransformers/CT-M3-BestLoss)|
|CT-M3-Complete|[crisistransformers/CT-M3-Complete](https://huggingface.co/crisistransformers/CT-M3-Complete)|
| sentence encoder | source |
|--|--|
|CT-M1-Complete-SE (mono-lingual: EN)|[crisistransformers/CT-M1-Complete-SE](https://huggingface.co/crisistransformers/CT-M1-Complete-SE)|
|CT-XLMR-SE (multi-lingual)|[crisistransformers/CT-XLMR-SE](https://huggingface.co/crisistransformers/CT-XLMR-SE)|
|CT-mBERT-SE (multi-lingual)|[crisistransformers/CT-mBERT-SE](https://huggingface.co/crisistransformers/CT-mBERT-SE)|
Languages supported by the multi-lingual sentence encoders: Albanian, Arabic, Armenian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, French (Canada), Galician, Georgian, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Kurdish (Sorani), Latvian, Lithuanian, Macedonian, Malay, Marathi, Mongolian, Myanmar (Burmese), Norwegian, Persian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, and Vietnamese.
## Citation
If you use CrisisTransformers and the mono-lingual sentence encoder, please cite the following paper:
```
@article{lamsal2023crisistransformers,
title={CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
journal={Knowledge-Based Systems},
pages={111916},
year={2024},
publisher={Elsevier}
}
```
If you use the multi-lingual sentence encoders, please cite the following paper:
```
@article{lamsal2024semantically,
title={Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts},
author={Rabindra Lamsal and
Maria Rodriguez Read and
Shanika Karunasekera},
year={2024},
eprint={2403.16614},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Sorour/cls_sentiment_mistral_v1 | Sorour | 2024-05-20T01:12:58Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-20T00:24:07Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
model-index:
- name: cls_sentiment_mistral_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cls_sentiment_mistral_v1
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5972
## Model description
More information needed
## Intended uses & limitations
More information needed
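A hedged loading sketch (not part of the original card): assuming this repository contains the PEFT/LoRA adapter produced by the run above, it can be attached to the Mistral-7B-Instruct-v0.2 base model as follows.

```python
# Hedged example: attaching the fine-tuned adapter to its base model with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "Sorour/cls_sentiment_mistral_v1")
```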
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7365 | 0.1986 | 50 | 0.7344 |
| 0.6778 | 0.3972 | 100 | 0.6852 |
| 0.6548 | 0.5958 | 150 | 0.6588 |
| 0.6728 | 0.7944 | 200 | 0.6333 |
| 0.6148 | 0.9930 | 250 | 0.6106 |
| 0.43 | 1.1917 | 300 | 0.6174 |
| 0.4575 | 1.3903 | 350 | 0.6081 |
| 0.4225 | 1.5889 | 400 | 0.6058 |
| 0.4136 | 1.7875 | 450 | 0.5976 |
| 0.441 | 1.9861 | 500 | 0.5972 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
WhoTookMyAmogusNickname/llama2-7b-megacode2_min100-GGML | WhoTookMyAmogusNickname | 2024-05-20T01:11:13Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-13T05:34:54Z |
[llama2-7b-megacode2_min100](https://huggingface.co/andreaskoepf/llama2-7b-megacode2_min100) converted and quantized to GGML.\
I had to use an "[added_tokens.json](https://huggingface.co/andreaskoepf/llama2-7b-oasst-baseline/blob/main/added_tokens.json)" from another of their models, as the vocab size is, strangely, 32007 |
Solshine/llama3_SOAP_Notes_06_lora_model | Solshine | 2024-05-20T01:08:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-20T01:08:24Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Solshine
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
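A hedged loading sketch (not part of the original card): assuming this repository holds LoRA adapter weights in PEFT format, they could be loaded roughly as follows.

```python
# Hedged example: loading the LoRA adapter together with its base model via PEFT.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("Solshine/llama3_SOAP_Notes_06_lora_model")
tokenizer = AutoTokenizer.from_pretrained("Solshine/llama3_SOAP_Notes_06_lora_model")
```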
|
St4n/wav2vec2-base-self-519-colab-3-grams | St4n | 2024-05-20T01:01:12Z | 82 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-20T01:00:05Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base-960h
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-base-self-331-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: test
args: default
metrics:
- name: Wer
type: wer
value: 0.15007215007215008
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-self-331-colab
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3282
- Wer: 0.1501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.3444 | 30.77 | 200 | 2.1940 | 0.9841 |
| 1.972 | 61.54 | 400 | 1.4582 | 0.8167 |
| 1.3875 | 92.31 | 600 | 0.8476 | 0.5902 |
| 0.9092 | 123.08 | 800 | 0.5445 | 0.3636 |
| 0.6382 | 153.85 | 1000 | 0.4129 | 0.2641 |
| 0.5789 | 184.62 | 1200 | 0.3497 | 0.1876 |
| 0.4632 | 215.38 | 1400 | 0.3478 | 0.1616 |
| 0.4474 | 246.15 | 1600 | 0.3394 | 0.1486 |
| 0.429 | 276.92 | 1800 | 0.3282 | 0.1501 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
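A hedged inference sketch (not part of the original card): the fine-tuned CTC model can be run through the automatic-speech-recognition pipeline; the audio file path is a placeholder.

```python
# Hedged example: transcribing a local audio file with the fine-tuned wav2vec2 model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="St4n/wav2vec2-base-self-519-colab-3-grams")
print(asr("sample.wav"))  # "sample.wav" is a placeholder path
```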
|
jsfamily/korean-small_t33 | jsfamily | 2024-05-20T00:59:16Z | 96 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:korean_samll_dataset3",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-20T00:57:22Z | ---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
base_model: openai/whisper-small
datasets:
- korean_samll_dataset3
model-index:
- name: korean-small_t33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean-small_t33
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the korean_samll_dataset3 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1656
- eval_cer: 6.6580
- eval_runtime: 2081.9667
- eval_samples_per_second: 3.128
- eval_steps_per_second: 0.391
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ytcheng/llama-3-8B-pretrain_v2 | ytcheng | 2024-05-20T00:56:09Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T00:52:01Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
head-empty-ai/Mytho-Lemon-11B | head-empty-ai | 2024-05-20T00:55:38Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:finetune:KatyTheCutie/LemonadeRP-4.5.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T20:45:35Z | ---
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
library_name: transformers
tags:
- mergekit
- merge
---
# Mytho-Lemon-11B
Just a simple 11B frankenmerge of LemonadeRP and MythoMist which was used in [matchaaaaa/Chaifighter-20B-v2](https://huggingface.co/matchaaaaa/Chaifighter-20B-v2).
I didn't have to merge the models like this in Chaifighter, but I already had this lying around from a previous attempt, so I just went with it. It's nothing special, but here it is!
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Gryphe/MythoMist-7B](https://huggingface.co/Gryphe/MythoMist-7b)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: KatyTheCutie/LemonadeRP-4.5.3
layer_range: [0, 24]
- sources:
- model: Gryphe/MythoMist-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Anyway, have a great day! |
lomov/targetsandgoalsv1 | lomov | 2024-05-20T00:52:59Z | 124 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:targetsandgoalsv1/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T00:51:53Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- targetsandgoalsv1/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.22812850773334503
f1_macro: 0.928605054676046
f1_micro: 0.9313725490196079
f1_weighted: 0.9297769573887364
precision_macro: 0.9294524189261031
precision_micro: 0.9313725490196079
precision_weighted: 0.930390072030939
recall_macro: 0.93
recall_micro: 0.9313725490196079
recall_weighted: 0.9313725490196079
accuracy: 0.9313725490196079
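A hedged usage sketch (not part of the original card): the classifier can be queried through the standard transformers pipeline; the input reuses the widget text above.

```python
# Hedged example: running the AutoTrain text classifier via the transformers pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="lomov/targetsandgoalsv1")
print(classifier("I love AutoTrain"))  # returns the predicted label and its score
```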
|
souvik0306/test_quant_merge_facebook_opt | souvik0306 | 2024-05-20T00:51:06Z | 84 | 1 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-05-20T00:50:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lomov/strategytransitionplanv1 | lomov | 2024-05-20T00:48:16Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:strategytransitionplanv1/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T00:47:26Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- strategytransitionplanv1/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.12461505830287933
f1_macro: 0.9837010534684953
f1_micro: 0.9838709677419355
f1_weighted: 0.9838517321638102
precision_macro: 0.9848484848484849
precision_micro: 0.9838709677419355
precision_weighted: 0.9846041055718475
recall_macro: 0.9833333333333334
recall_micro: 0.9838709677419355
recall_weighted: 0.9838709677419355
accuracy: 0.9838709677419355
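For quick inference, a minimal usage sketch is shown below; the repo id and the example text come from this card, while the label names returned are whatever the AutoTrain run produced and are not documented here.

```python
# Hedged sketch: load the classifier from the Hub and score the widget example.
from transformers import pipeline

clf = pipeline("text-classification", model="lomov/strategytransitionplanv1")
print(clf("I love AutoTrain"))  # e.g. [{"label": "...", "score": 0.99}] — labels depend on the training data
```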
|
lomov/strategydisofrisksv1 | lomov | 2024-05-20T00:45:50Z | 112 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:strategydisofrisksv1/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T00:44:53Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- strategydisofrisksv1/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.15990132093429565
f1_macro: 0.974930590799176
f1_micro: 0.975609756097561
f1_weighted: 0.9752584323168365
precision_macro: 0.9767316017316017
precision_micro: 0.975609756097561
precision_weighted: 0.9767447999155316
recall_macro: 0.975
recall_micro: 0.975609756097561
recall_weighted: 0.975609756097561
accuracy: 0.975609756097561
|
OscarGalavizC/roberta-base-bne-finetuned-multi-sentiment | OscarGalavizC | 2024-05-20T00:45:17Z | 109 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:BSC-LT/roberta-base-bne",
"base_model:finetune:BSC-LT/roberta-base-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-16T20:36:26Z | ---
license: apache-2.0
base_model: BSC-TeMU/roberta-base-bne
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-multi-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-multi-sentiment
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7116
- Accuracy: 0.6914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
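As a rough guide to reproducing this setup, the sketch below maps the hyperparameters above onto 🤗 `TrainingArguments`; the dataset, tokenizer preprocessing, and label count are not documented in this card, so those parts are placeholders.

```python
# Hedged sketch only: mirrors the listed hyperparameters; the dataset and num_labels are assumptions.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "BSC-TeMU/roberta-base-bne"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)  # label count assumed

args = TrainingArguments(
    output_dir="roberta-base-bne-finetuned-multi-sentiment",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=train_ds, eval_dataset=eval_ds)  # datasets not published with this card
# trainer.train()
```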
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.809 | 1.0 | 115 | 0.7168 | 0.6852 |
| 0.6101 | 2.0 | 230 | 0.7116 | 0.6914 |
### Framework versions
- Transformers 4.40.2
- Pytorch 1.13.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF | mradermacher | 2024-05-20T00:43:49Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"orpo",
"en",
"base_model:baconnier/Notaires_dolphin-2.9.1-yi-1.5-9b",
"base_model:quantized:baconnier/Notaires_dolphin-2.9.1-yi-1.5-9b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T23:46:08Z | ---
base_model: baconnier/Notaires_dolphin-2.9.1-yi-1.5-9b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- orpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/baconnier/Notaires_dolphin-2.9.1-yi-1.5-9b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
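As a concrete example, one of the single-file quants listed below can be pulled and run with `llama-cpp-python`; the chosen quant, context length, and prompt here are illustrative assumptions, not part of this card.

```python
# Hedged sketch: download the Q4_K_M file from this repo and run a short completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF",
    filename="Notaires_dolphin-2.9.1-yi-1.5-9b.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Explain the role of a notary under French law.", max_tokens=256)
print(out["choices"][0]["text"])
```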
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.IQ3_XS.gguf) | IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.IQ3_S.gguf) | IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.IQ3_M.gguf) | IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Notaires_dolphin-2.9.1-yi-1.5-9b-GGUF/resolve/main/Notaires_dolphin-2.9.1-yi-1.5-9b.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
gnelson/sup-sample-1-phi3-mini | gnelson | 2024-05-20T00:41:02Z | 1 | 0 | null | [
"gguf",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-20T00:16:31Z | ---
license: cc-by-sa-4.0
---
|
tsavage68/MedQA_L3_1000steps_1e6rate_01beat_CSFTDPO | tsavage68 | 2024-05-20T00:35:13Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-20T00:31:07Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: MedQA_L3_1000steps_1e6rate_01beat_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_1000steps_1e6rate_01beat_CSFTDPO
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4018
- Rewards/chosen: -1.1456
- Rewards/rejected: -2.9172
- Rewards/accuracies: 0.7912
- Rewards/margins: 1.7716
- Logps/rejected: -50.4889
- Logps/chosen: -29.6790
- Logits/rejected: -1.3967
- Logits/chosen: -1.3936
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
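For orientation, the sketch below shows how these settings might be passed to `trl`'s `DPOTrainer`; the preference dataset, prompt formatting, and the DPO beta (the repo name suggests 0.1) are assumptions, and exact keyword names vary between trl releases.

```python
# Hedged sketch only: wires the listed hyperparameters into a DPO run.
# The preference dataset and beta=0.1 are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

args = TrainingArguments(
    output_dir="MedQA_L3_1000steps_1e6rate_01beat_CSFTDPO",
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)
# trainer = DPOTrainer(model, ref_model=None, args=args, beta=0.1,
#                      train_dataset=preference_ds, tokenizer=tokenizer)  # dataset not published
# trainer.train()
```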
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.695 | 0.0489 | 50 | 0.6713 | 0.0342 | -0.0142 | 0.6615 | 0.0484 | -21.4583 | -17.8807 | -0.9400 | -0.9395 |
| 0.6187 | 0.0977 | 100 | 0.5915 | -0.1174 | -0.4200 | 0.7121 | 0.3027 | -25.5168 | -19.3963 | -1.0412 | -1.0403 |
| 0.5652 | 0.1466 | 150 | 0.5103 | -0.6250 | -1.3027 | 0.7495 | 0.6777 | -34.3433 | -24.4723 | -1.1124 | -1.1110 |
| 0.4549 | 0.1954 | 200 | 0.5152 | -1.3616 | -2.3988 | 0.7231 | 1.0372 | -45.3043 | -31.8385 | -1.2048 | -1.2020 |
| 0.4875 | 0.2443 | 250 | 0.4642 | -0.6443 | -1.7506 | 0.7648 | 1.1063 | -38.8228 | -24.6654 | -1.1785 | -1.1765 |
| 0.4433 | 0.2931 | 300 | 0.4453 | -0.8917 | -2.2308 | 0.8044 | 1.3391 | -43.6244 | -27.1394 | -1.2423 | -1.2401 |
| 0.5036 | 0.3420 | 350 | 0.4581 | -0.7568 | -2.0680 | 0.7692 | 1.3112 | -41.9963 | -25.7907 | -1.2182 | -1.2158 |
| 0.6285 | 0.3908 | 400 | 0.4703 | -0.6136 | -1.9063 | 0.7604 | 1.2927 | -40.3798 | -24.3588 | -1.2386 | -1.2361 |
| 0.5726 | 0.4397 | 450 | 0.4732 | -0.4602 | -1.5238 | 0.7692 | 1.0636 | -36.5545 | -22.8248 | -1.2652 | -1.2626 |
| 0.5198 | 0.4885 | 500 | 0.4280 | -0.9825 | -2.4466 | 0.8066 | 1.4641 | -45.7828 | -28.0480 | -1.3426 | -1.3399 |
| 0.3963 | 0.5374 | 550 | 0.4236 | -0.9424 | -2.3856 | 0.8022 | 1.4432 | -45.1725 | -27.6467 | -1.3514 | -1.3488 |
| 0.3233 | 0.5862 | 600 | 0.4127 | -0.9551 | -2.5770 | 0.8000 | 1.6219 | -47.0868 | -27.7738 | -1.3761 | -1.3733 |
| 0.3955 | 0.6351 | 650 | 0.4236 | -0.9988 | -2.7155 | 0.7846 | 1.7167 | -48.4714 | -28.2110 | -1.3837 | -1.3806 |
| 0.3121 | 0.6839 | 700 | 0.4109 | -1.0837 | -2.8282 | 0.7868 | 1.7445 | -49.5986 | -29.0595 | -1.3902 | -1.3871 |
| 0.4809 | 0.7328 | 750 | 0.4060 | -1.1344 | -2.8863 | 0.7846 | 1.7519 | -50.1796 | -29.5667 | -1.3954 | -1.3923 |
| 0.4075 | 0.7816 | 800 | 0.4013 | -1.1649 | -2.9284 | 0.7868 | 1.7635 | -50.6008 | -29.8717 | -1.3971 | -1.3939 |
| 0.584 | 0.8305 | 850 | 0.4014 | -1.1482 | -2.9188 | 0.7890 | 1.7706 | -50.5041 | -29.7042 | -1.3971 | -1.3939 |
| 0.5942 | 0.8793 | 900 | 0.4042 | -1.1517 | -2.9160 | 0.7846 | 1.7643 | -50.4761 | -29.7394 | -1.3965 | -1.3934 |
| 0.3169 | 0.9282 | 950 | 0.4040 | -1.1507 | -2.9162 | 0.7934 | 1.7655 | -50.4786 | -29.7294 | -1.3965 | -1.3934 |
| 0.2727 | 0.9770 | 1000 | 0.4018 | -1.1456 | -2.9172 | 0.7912 | 1.7716 | -50.4889 | -29.6790 | -1.3967 | -1.3936 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
netekiswill/Netekis-will | netekiswill | 2024-05-20T00:30:52Z | 0 | 0 | null | [
"region:us"
] | null | 2024-05-20T00:29:47Z | <p><strong>Winning concerning Nursing Tasks: A Hand made Approach for Progress</strong></p>
<p><strong>Show:</strong></p>
<p>In the referencing universe of nursing getting ready, tasks like NURS FPX 4010 Appraisal 4, NURS FPX 4030 Assessment 2, and NURS FPX 4020 Assessment 4 go probably as basic achievements in outlining understudies' scholastic excursion. Anyway, examining these evaluations can routinely be overpowering, particularly while changing clinical turns, coursework, and individual commitments. Seeing these difficulties, <a href="https://onlineclassassignment.com/do-my-nursing-assignment/">Do My Nursing Assignment</a> associations have arisen as helps for understudies looking for custom fitted help with their shrewd undertakings. In this aide, we research the control of such associations in streamlining nursing endeavors, offering modified manages any outcomes in regards to address the excellent requirements of every single understudy. From research papers to important assessments and introductions, these associations offer cautious help, empowering understudies to succeed instructively and thrive in their nursing studies.</p>
<p><strong>Winning with "Do My Nursing Undertaking" Associations</strong></p>
<p>"Do My Nursing Task" associations offer fitted help to nursing understudies wrestling with the sales of scholarly appraisals. These associations give a help to understudies looking for changed assist in examining tasks with enjoying NURS FPX 4010 Appraisal 4, NURS FPX 4030 Evaluation 2, and NURS FPX 4020 Assessment 4. Whether understudies need support with evaluation, making, or arranging, these associations offer master bearing to guarantee sublime entries that fulfill instructive guidelines.</p>
<p>One of the basic advantages of "Do My Nursing Task" associations is their capacity to re-attempt support as indicated by every understudy's phenomenal necessities. Experienced experts in nursing planning team up with understudies to make sense of assignment necessities, learning goals, and individual propensities. This changed strategy guarantees that understudies get relegated help remarkably created to their instructive targets, enabling a more huge seeing obviously materials and improving generally learning results.</p>
<p>Too, "Do My Nursing Task" associations engage understudies to proficiently deal with their scholastic commitment even more. By re-appropriating express bits of their tasks to prepared experts, understudies can save basic opportunity to zero in on different necessities, for example, clinical positions, research tries, or individual commitments. This smoothed out strategy lessens tension and scholarly strain as well as connects with understudies to accomplish a predominant concordance among fun and serious activities while winning in their nursing studies.</p>
<p><strong>NURS FPX 4010 Assessment 4: Overpowering Check Based Practice in Nursing</strong></p>
<p>NURS FPX 4010 Evaluation 4 is a fundamental piece of nursing planning, zeroing in on proof based practice (EBP) standards. In this <a href="https://www.onlineclassassignment.com/nurs-fpx-4010-assessment-4-stakeholders-presentation/">nurs fpx 4010 assessment 4</a>, understudies are blessed with on an extremely essential level dissecting research writing to illuminate clinical course and nursing mediations. By participating in this cycle, understudies develop essential limits in surveying proof, joining disclosures, and applying them to veritable conditions experienced in clinical practice.</p>
<p>A focal piece of NURS FPX 4010 Evaluation 4 is the mix of appraisal disclosures into nursing practice. Understudies should show their capacity to make an interpretation of confirmation into basic suggestion that advance ideal patient results. This appraisal not just underlines the significance of keeping awake with the most recent with repeating design research yet what's more elements the sincere control of EBP in conveying predominant grade, patient-focused care.</p>
<p>In addition, NURS FPX 4010 Assessment 4 urges understudies to think about the outcomes of EBP for their lord movement as clinical gatekeepers. By on a very basic level assessing research making and its congruity to nursing practice, understudies gain a more critical appreciation for the gig of confirmation in frivolity their clinical choices and mediations. Through this brilliant collaboration, understudies are more prepared to embrace EBP rules as strong students and supporters for quality idea.</p>
<p><strong>NURS FPX 4030 Appraisal 2: Administering Certain level Pharmacology Considerations</strong></p>
<p>NURS FPX 4030 Appraisal 2 jumps into the puzzling area of cutting edge pharmacology contemplations, moving nursing understudies to extend how they could translate pharmacological rules and their application in clinical practice. This assessment guesses that understudies ought to show capacity in pharmacokinetics, pharmacodynamics, drug joint endeavors, and repulsive impacts across different solution classes. Through concentrated assessment and assessment, understudies refine their pharmacological information and urge unequivocal capacities to think fundamental for got and possible prescription the board.</p>
<p>A place of combination of <a href="https://www.onlineclassassignment.com/nurs-fpx-4030-assessment-2-determining-the-credibility-of-evidence-and-resources/">nurs fpx 4030 assessment 2</a> is the joining of check based practice into pharmacological course. Understudies are enriched with assessing stream research writing to train their comprehension concerning calm prescriptions, helpful purposes, and nursing thoughts. By organizing affirmation from different sources, understudies figure out a smart approach with informed clinical decisions, guaranteeing ideal patient results while sticking to moral and genuine contemplations.</p>
<p>Also, NURS FPX 4030 Evaluation 2 nerves the significance of interdisciplinary support in pharmacological idea. Understudies take part in conversations and setting focused assessments that feature the control of clinical advantages packs in medicine the bosses, underlining productive correspondence, and joint effort. By seeing the obligations of different clinical advantages trained professionals, understudies gain understanding into the careful strategy for overseeing patient idea and develop limits critical for supportive practice in organized clinical settings.</p>
<p><strong>NURS FPX 4020 Appraisal 4: Keeping an eye out for Neighborhood Needs</strong></p>
<p>NURS FPX 4020 Appraisal 4 brilliant lights on keeping an eye out for neighborhood needs through expansive nursing intercessions. In this evaluation, nursing understudies are supplied with perceiving undeniable clinical issues inside a particular area orchestrating proof based systems to drive success and foil illness. By working together with neighborhood and using epidemiological information, understudies develop exhaustive approaches to overseeing address the remarkable flourishing difficulties looked by changed masses.</p>
<p>A fundamental piece of NURS FPX 4020 Evaluation 4 is the supplement on thriving movement and contamination repudiation. Understudies are approached to investigate different determinants of success, including social, financial, and organic parts, to get a handle on their effect on area. Through the execution of success direction programs, outreach drives, and method support, understudies add to extra cultivating the general thriving outcomes and individual satisfaction for people and associations.</p>
<p>Moreover, <a href="https://www.onlineclassassignment.com/nurs-fpx-4020-assessment-4-improving-quality-of-care-and-patient-safety/">nurs fpx 4020 assessment 4</a> elements the significance of social limit and responsiveness in area nursing practice. Understudies are attempted to examine social convictions, values, and practices while orchestrating mediations, guaranteeing that clinical thought associations are conscious, open, and complete for all area. By embracing combination and social lowliness, understudies foster trust and worked with effort inside the area, last moving flourishing regard and social freedoms.</p>
<p>In light of everything, the utilization of "Do My Nursing Task" associations shows instrumental in researching the complexities of assessments like NURS FPX 4010 Appraisal 4, NURS FPX 4030 Assessment 2, and NURS FPX 4020 Assessment 4. These associations offer essential help to nursing understudies wrestling with the sales of their scholarly cooperation. By offering hand created help, ace heading, and changed strategies, they attract understudies to win in their coursework and accomplish their scholarly objectives. Whether it's decision unquestionable level pharmacology contemplations, watching out for neighborhood needs, or exploring complex nursing tasks, "Do My Nursing Undertaking" associations go probably as very important accessories, empowering understudies to flourish in their nursing studies and genuinely focus on the clinical thought calling.</p> |
anzorq/w2v-bert-2.0-kbd-colab | anzorq | 2024-05-20T00:14:41Z | 79 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-15T23:28:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
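Pending the author's own snippet, the repository tags mark this checkpoint as `wav2vec2-bert` for automatic speech recognition, so a minimal inference sketch would look like the following; the audio path is a placeholder and this is an assumption based on the tags rather than documented usage.

```python
# Hedged sketch: generic ASR pipeline applied to this checkpoint; "audio.wav" is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="anzorq/w2v-bert-2.0-kbd-colab")
print(asr("audio.wav")["text"])
```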
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |