pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0 to 18.3M) | metadata (stringlengths 2 to 1.07B) | id (stringlengths 5 to 122) | last_modified (null) | tags (sequencelengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25)
---|---|---|---|---|---|---|---|---|
text-generation | transformers | # cognitivecomputations/dolphin-2.9-llama3-8b-1m AWQ
- Model creator: [cognitivecomputations](https://huggingface.co/cognitivecomputations)
- Original model: [dolphin-2.9-llama3-8b-1m](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b-1m)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/dolphin-2.9-llama3-8b-1m-AWQ"
system_message = "You are dolphin-2.9-llama3-8b-1m, incarnated as a powerful AI. You were created by cognitivecomputations."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It provides faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only; macOS users should use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
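As a quick illustration of the vLLM route, here is a minimal sketch (an assumption for this repo, not part of the original card) that loads the AWQ checkpoint with vLLM's built-in AWQ support:
```python
from vllm import LLM, SamplingParams

# Load the AWQ-quantized checkpoint; vLLM handles AWQ natively via quantization="awq"
llm = LLM(model="solidrust/dolphin-2.9-llama3-8b-1m-AWQ", quantization="awq")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain AWQ quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```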
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/dolphin-2.9-llama3-8b-1m-AWQ | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:01:32+00:00 |
null | null | {} | optimum-internal-testing/optimum-neuron-cache-for-testing-arwim | null | [
"region:us"
] | null | 2024-05-03T16:02:09+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-2-ko-7b - bnb 8bits
- Model creator: https://huggingface.co/beomi/
- Original model: https://huggingface.co/beomi/llama-2-ko-7b/
Original model description:
---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- kollama
- llama-2-ko
---
**Update Log**
- 2023.12.27
- New Model is here! Trained with only open-accessible Korean text corpus: https://huggingface.co/beomi/open-llama-2-ko-7b
- 2023.10.19
- Fixed a tokenizer bug (space not applied when decoding) for `transformers>=4.34.0`
# **Llama-2-Ko** 🦙🇰🇷
Llama-2-Ko serves as an advanced iteration of Llama 2, benefiting from an expanded vocabulary and the inclusion of a Korean corpus in its further pretraining. Just like its predecessor, Llama-2-Ko operates within the broad range of generative text models that stretch from 7 billion to 70 billion parameters. This repository focuses on the 7B pretrained version, which is tailored to fit the Hugging Face Transformers format. For access to the other models, feel free to consult the index provided below.
## Model Details
**Model Developers** Junbum Lee (Beomi)
**Variations** Llama-2-Ko will come in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of Korean online data*|7B|4k|✗|>40B*|1e<sup>-5</sup>|
*Plan to train up to 200B tokens
**Vocab Expansion**
| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Llama-2 | 32000 | Sentencepiece BPE |
| **Expanded Llama-2-Ko** | 46336 | Sentencepiece BPE. Added Korean vocab and merges |
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요']` |
| Llama-2-Ko | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요']` |
**Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
| Llama-2-Ko | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's lm-evaluation-harness https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot
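For reference, a run with the polyglot branch of the harness would look roughly like the sketch below; the task name and flags are assumptions about that branch's CLI, not details taken from this card:
```bash
git clone -b polyglot https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness && pip install -e .

# Task name and few-shot count are illustrative assumptions
python main.py \
  --model hf-causal \
  --model_args pretrained=beomi/llama-2-ko-7b \
  --tasks kobest_copa \
  --num_fewshot 5 \
  --device cuda:0
```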
### NSMC (Acc) - 50000 full test
TBD
### COPA (F1)
<img src=https://user-images.githubusercontent.com/11323660/255575809-c037bc6e-0566-436a-a6c1-2329ac92187a.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| https://huggingface.co/kakaobrain/kogpt | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| https://huggingface.co/facebook/xglm-7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| Llama-2 Original 7B* | 0.562033 | 0.575982 | 0.576216 | 0.595532 |
| Llama-2-Ko-7b 20B (10k) | 0.738780 | 0.762639 | 0.780761 | 0.797863 |
| Llama-2-Ko-7b 40B (20k) | 0.743630 | 0.792716 | 0.803746 | 0.825944 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### HellaSwag (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576090-a2bfc1ae-d117-44b7-9f7b-262e41179ec1.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| https://huggingface.co/kakaobrain/kogpt | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| https://huggingface.co/facebook/xglm-7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| Llama-2 Original 7B* | 0.415390 | 0.431382 | 0.421342 | 0.442003 |
| Llama-2-Ko-7b 20B (10k) | 0.451757 | 0.466751 | 0.472607 | 0.482776 |
| Llama-2-Ko-7b 40B (20k) | 0.456246 | 0.465665 | 0.469810 | 0.477374 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### BoolQ (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576343-5d847a6f-3b6a-41a7-af37-0f11940a5ea4.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| https://huggingface.co/kakaobrain/kogpt | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| https://huggingface.co/facebook/xglm-7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| Llama-2 Original 7B* | 0.352050 | 0.563238 | 0.474788 | 0.419222 |
| Llama-2-Ko-7b 20B (10k) | 0.360656 | 0.679743 | 0.680109 | 0.662152 |
| Llama-2-Ko-7b 40B (20k) | 0.578640 | 0.697747 | 0.708358 | 0.714423 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### SentiNeg (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576572-b005a81d-fa4d-4709-b48a-f0fe4eed17a3.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| https://huggingface.co/kakaobrain/kogpt | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| https://huggingface.co/facebook/xglm-7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| Llama-2 Original 7B* | 0.347502 | 0.529124 | 0.480641 | 0.788457 |
| Llama-2-Ko-7b 20B (10k) | 0.485546 | 0.829503 | 0.871141 | 0.851253 |
| Llama-2-Ko-7b 40B (20k) | 0.459447 | 0.761079 | 0.727611 | 0.936988 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
## Note for oobabooga/text-generation-webui
Remove the `ValueError` catch in the `load_tokenizer` function (around line 109) of `modules/models.py`:
```diff
diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
trust_remote_code=shared.args.trust_remote_code,
use_fast=False
)
- except ValueError:
+ except:
tokenizer = AutoTokenizer.from_pretrained(
path_to_model,
trust_remote_code=shared.args.trust_remote_code,
```
Since Llama-2-Ko uses the fast tokenizer provided by the HF tokenizers library, NOT the sentencepiece package,
you must pass the `use_fast=True` option when initializing the tokenizer.
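A minimal sketch of that tokenizer setup (the repo id is the original model's; everything else is standard Transformers usage):
```python
from transformers import AutoTokenizer

# use_fast=True selects the HF tokenizers (fast) implementation that Llama-2-Ko ships with
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b", use_fast=True)
print(tokenizer.tokenize("안녕하세요, 오늘은 날씨가 좋네요."))
```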
Apple Silicon does not support BF16 computation, so use CPU instead. (BF16 is supported when using an NVIDIA GPU.)
## Citation
```
@misc {l._junbum_2023,
author = { {L. Junbum} },
title = { llama-2-ko-7b (Revision 4a9993e) },
year = 2023,
url = { https://huggingface.co/beomi/llama-2-ko-7b },
doi = { 10.57967/hf/1098 },
publisher = { Hugging Face }
}
```
## Acknowledgement
The training is supported by [TPU Research Cloud](https://sites.research.google/trc/) program.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__llama-2-ko-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 39.43 |
| ARC (25-shot) | 48.46 |
| HellaSwag (10-shot) | 75.28 |
| MMLU (5-shot) | 39.56 |
| TruthfulQA (0-shot) | 34.49 |
| Winogrande (5-shot) | 72.14 |
| GSM8K (5-shot) | 1.97 |
| DROP (3-shot) | 4.1 |
| {} | RichardErkhov/beomi_-_llama-2-ko-7b-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-03T16:02:42+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/uj5jfvc | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:03:11+00:00 |
text-generation | transformers | {} | sanchit-gandhi/Mistral-1.5B-Instruct-v0.2-first-6 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:03:11+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mehdisebai/CodeQwen-text-to-rule | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:03:14+00:00 |
fill-mask | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | CLASS-MATE/BERT-MLM-multilingual-cased | null | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:03:20+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2880
- F1 Score: 0.8934
- Accuracy: 0.8932
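As a hedged sketch (not part of the original card), a PEFT adapter like this is typically loaded on top of its base model roughly as follows; the sequence-classification head and label count are assumptions inferred from the reported F1/accuracy metrics:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H4-seqsight_65536_512_47M-L8_f"

# Assumption: a binary sequence-classification task, as the F1/accuracy metrics suggest
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
```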
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3756 | 2.17 | 200 | 0.2960 | 0.8851 | 0.8850 |
| 0.2905 | 4.35 | 400 | 0.3002 | 0.8839 | 0.8836 |
| 0.2784 | 6.52 | 600 | 0.2925 | 0.8883 | 0.8884 |
| 0.2756 | 8.7 | 800 | 0.3025 | 0.8826 | 0.8823 |
| 0.2605 | 10.87 | 1000 | 0.2880 | 0.8905 | 0.8905 |
| 0.2556 | 13.04 | 1200 | 0.2872 | 0.8961 | 0.8960 |
| 0.2488 | 15.22 | 1400 | 0.2880 | 0.8949 | 0.8946 |
| 0.2435 | 17.39 | 1600 | 0.2970 | 0.8908 | 0.8905 |
| 0.2394 | 19.57 | 1800 | 0.2875 | 0.8992 | 0.8994 |
| 0.2346 | 21.74 | 2000 | 0.2896 | 0.8962 | 0.8960 |
| 0.2279 | 23.91 | 2200 | 0.3006 | 0.8921 | 0.8919 |
| 0.2257 | 26.09 | 2400 | 0.2989 | 0.8874 | 0.8871 |
| 0.2218 | 28.26 | 2600 | 0.2959 | 0.8921 | 0.8919 |
| 0.2164 | 30.43 | 2800 | 0.2968 | 0.8907 | 0.8905 |
| 0.215 | 32.61 | 3000 | 0.2993 | 0.9015 | 0.9014 |
| 0.2115 | 34.78 | 3200 | 0.3025 | 0.8913 | 0.8912 |
| 0.2074 | 36.96 | 3400 | 0.3011 | 0.8981 | 0.8980 |
| 0.2052 | 39.13 | 3600 | 0.3075 | 0.8901 | 0.8898 |
| 0.2008 | 41.3 | 3800 | 0.3079 | 0.8934 | 0.8932 |
| 0.1992 | 43.48 | 4000 | 0.3171 | 0.8913 | 0.8912 |
| 0.1985 | 45.65 | 4200 | 0.3199 | 0.8858 | 0.8857 |
| 0.1912 | 47.83 | 4400 | 0.3155 | 0.8891 | 0.8891 |
| 0.1905 | 50.0 | 4600 | 0.3100 | 0.8940 | 0.8939 |
| 0.19 | 52.17 | 4800 | 0.3164 | 0.8884 | 0.8884 |
| 0.1915 | 54.35 | 5000 | 0.3157 | 0.8935 | 0.8932 |
| 0.185 | 56.52 | 5200 | 0.3235 | 0.8887 | 0.8884 |
| 0.1806 | 58.7 | 5400 | 0.3242 | 0.8900 | 0.8898 |
| 0.1809 | 60.87 | 5600 | 0.3224 | 0.8880 | 0.8877 |
| 0.1787 | 63.04 | 5800 | 0.3286 | 0.8866 | 0.8864 |
| 0.1788 | 65.22 | 6000 | 0.3372 | 0.8859 | 0.8857 |
| 0.1762 | 67.39 | 6200 | 0.3454 | 0.8779 | 0.8775 |
| 0.1732 | 69.57 | 6400 | 0.3405 | 0.8826 | 0.8823 |
| 0.171 | 71.74 | 6600 | 0.3395 | 0.8914 | 0.8912 |
| 0.1726 | 73.91 | 6800 | 0.3427 | 0.8873 | 0.8871 |
| 0.169 | 76.09 | 7000 | 0.3593 | 0.8820 | 0.8816 |
| 0.1688 | 78.26 | 7200 | 0.3436 | 0.8846 | 0.8843 |
| 0.1674 | 80.43 | 7400 | 0.3411 | 0.8900 | 0.8898 |
| 0.1641 | 82.61 | 7600 | 0.3489 | 0.8892 | 0.8891 |
| 0.1646 | 84.78 | 7800 | 0.3523 | 0.8887 | 0.8884 |
| 0.1646 | 86.96 | 8000 | 0.3448 | 0.8899 | 0.8898 |
| 0.1629 | 89.13 | 8200 | 0.3592 | 0.8846 | 0.8843 |
| 0.1632 | 91.3 | 8400 | 0.3495 | 0.8865 | 0.8864 |
| 0.1578 | 93.48 | 8600 | 0.3543 | 0.8893 | 0.8891 |
| 0.1598 | 95.65 | 8800 | 0.3551 | 0.8878 | 0.8877 |
| 0.1612 | 97.83 | 9000 | 0.3517 | 0.8886 | 0.8884 |
| 0.163 | 100.0 | 9200 | 0.3541 | 0.8846 | 0.8843 |
| 0.163 | 102.17 | 9400 | 0.3523 | 0.8886 | 0.8884 |
| 0.1584 | 104.35 | 9600 | 0.3553 | 0.8880 | 0.8877 |
| 0.1561 | 106.52 | 9800 | 0.3557 | 0.8865 | 0.8864 |
| 0.1564 | 108.7 | 10000 | 0.3554 | 0.8886 | 0.8884 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:03:23+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2
This model is a fine-tuned version of [ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1](https://huggingface.co/ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
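For orientation only, the listed hyperparameters map onto `transformers.TrainingArguments` roughly as in this sketch (an illustration, not the original alignment-handbook recipe; the DPO-specific trainer setup is omitted):
```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed above
args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 GPUs x 8 per device x 4 steps = 256 total train batch size
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```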
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1", "model-index": [{"name": "0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2", "results": []}]} | ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0001_withdpo_4iters_bs256_5102lr_misit_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:03:29+00:00 |
null | null | {} | optimum-internal-testing/optimum-neuron-cache-for-testing-ummro | null | [
"region:us"
] | null | 2024-05-03T16:03:38+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** animaRegem
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-it-bnb-4bit"} | animaRegem/gemma-2b-it-lora-0_1-malayalam | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:04:03+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2723
- F1 Score: 0.8953
- Accuracy: 0.8953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.3593 | 2.17 | 200 | 0.2935 | 0.8873 | 0.8871 |
| 0.2808 | 4.35 | 400 | 0.2879 | 0.8960 | 0.8960 |
| 0.2632 | 6.52 | 600 | 0.2894 | 0.8888 | 0.8891 |
| 0.252 | 8.7 | 800 | 0.2906 | 0.8888 | 0.8884 |
| 0.2352 | 10.87 | 1000 | 0.2793 | 0.9110 | 0.9110 |
| 0.2293 | 13.04 | 1200 | 0.2952 | 0.8901 | 0.8898 |
| 0.2172 | 15.22 | 1400 | 0.2890 | 0.8948 | 0.8946 |
| 0.2113 | 17.39 | 1600 | 0.3144 | 0.8909 | 0.8905 |
| 0.2004 | 19.57 | 1800 | 0.3055 | 0.8945 | 0.8946 |
| 0.1942 | 21.74 | 2000 | 0.3162 | 0.8907 | 0.8905 |
| 0.1835 | 23.91 | 2200 | 0.3497 | 0.8696 | 0.8693 |
| 0.1786 | 26.09 | 2400 | 0.3230 | 0.8819 | 0.8816 |
| 0.1698 | 28.26 | 2600 | 0.3381 | 0.8858 | 0.8857 |
| 0.1611 | 30.43 | 2800 | 0.3506 | 0.8852 | 0.8850 |
| 0.1532 | 32.61 | 3000 | 0.3809 | 0.8799 | 0.8802 |
| 0.1489 | 34.78 | 3200 | 0.3671 | 0.8791 | 0.8789 |
| 0.1385 | 36.96 | 3400 | 0.3798 | 0.8786 | 0.8782 |
| 0.1347 | 39.13 | 3600 | 0.3871 | 0.8758 | 0.8754 |
| 0.1278 | 41.3 | 3800 | 0.4102 | 0.8761 | 0.8761 |
| 0.1241 | 43.48 | 4000 | 0.4262 | 0.8790 | 0.8789 |
| 0.1173 | 45.65 | 4200 | 0.4611 | 0.8715 | 0.8720 |
| 0.1122 | 47.83 | 4400 | 0.4375 | 0.8797 | 0.8795 |
| 0.11 | 50.0 | 4600 | 0.4266 | 0.8786 | 0.8789 |
| 0.1039 | 52.17 | 4800 | 0.4801 | 0.8736 | 0.8734 |
| 0.1057 | 54.35 | 5000 | 0.4509 | 0.8775 | 0.8775 |
| 0.0953 | 56.52 | 5200 | 0.4760 | 0.8717 | 0.8713 |
| 0.0926 | 58.7 | 5400 | 0.5029 | 0.8683 | 0.8679 |
| 0.0903 | 60.87 | 5600 | 0.4814 | 0.8722 | 0.8720 |
| 0.0863 | 63.04 | 5800 | 0.5023 | 0.8729 | 0.8727 |
| 0.0856 | 65.22 | 6000 | 0.5227 | 0.8670 | 0.8665 |
| 0.0833 | 67.39 | 6200 | 0.5262 | 0.8677 | 0.8672 |
| 0.0783 | 69.57 | 6400 | 0.5150 | 0.8695 | 0.8693 |
| 0.0761 | 71.74 | 6600 | 0.5296 | 0.8734 | 0.8734 |
| 0.0727 | 73.91 | 6800 | 0.5547 | 0.8704 | 0.8700 |
| 0.0705 | 76.09 | 7000 | 0.5961 | 0.8663 | 0.8658 |
| 0.0718 | 78.26 | 7200 | 0.5728 | 0.8608 | 0.8604 |
| 0.0666 | 80.43 | 7400 | 0.5711 | 0.8695 | 0.8693 |
| 0.0657 | 82.61 | 7600 | 0.5681 | 0.8652 | 0.8652 |
| 0.0638 | 84.78 | 7800 | 0.5880 | 0.8697 | 0.8693 |
| 0.0616 | 86.96 | 8000 | 0.5926 | 0.8695 | 0.8693 |
| 0.0638 | 89.13 | 8200 | 0.5964 | 0.8641 | 0.8638 |
| 0.0638 | 91.3 | 8400 | 0.5819 | 0.8708 | 0.8706 |
| 0.0594 | 93.48 | 8600 | 0.5993 | 0.8680 | 0.8679 |
| 0.0574 | 95.65 | 8800 | 0.5968 | 0.8675 | 0.8672 |
| 0.0586 | 97.83 | 9000 | 0.5952 | 0.8640 | 0.8638 |
| 0.0584 | 100.0 | 9200 | 0.6028 | 0.8614 | 0.8611 |
| 0.0583 | 102.17 | 9400 | 0.6088 | 0.8640 | 0.8638 |
| 0.0575 | 104.35 | 9600 | 0.6062 | 0.8682 | 0.8679 |
| 0.0576 | 106.52 | 9800 | 0.6077 | 0.8668 | 0.8665 |
| 0.0553 | 108.7 | 10000 | 0.6073 | 0.8667 | 0.8665 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:04:04+00:00 |
text-generation | transformers |
# Llama-3-8B-Irene-v0.1
<img src="https://huggingface.co/Virt-io/Llama-3-8B-Irene-v0.1/resolve/main/Irene.png">
[SillyTavern Presets](https://huggingface.co/Virt-io/SillyTavern-Presets)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [ResplendentAI/SOVL_Llama3_8B](https://huggingface.co/ResplendentAI/SOVL_Llama3_8B)
* [ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [Endevor/InfinityRP-v2-8B](https://huggingface.co/Endevor/InfinityRP-v2-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
  - model: ResplendentAI/SOVL_Llama3_8B
    parameters:
      density: 0.58
      weight: [0.15, 0.1, 0.1, 0.33]
  - model: Endevor/InfinityRP-v2-8B
    parameters:
      density: 0.58
      weight: [0.15, 0.1, 0.1, 0.25]
  - model: ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
    parameters:
      density: 0.66
      weight: [0.20, 0.35, 0.25, 0.25]
  - model: cgato/L3-TheSpice-8b-v0.8.3
    parameters:
      density: 0.66
      weight: [0.20, 0.15, 0.15, 0.25]
merge_method: dare_ties
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
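For reference, a config like this is usually run with the mergekit CLI roughly as follows; the output path and the `--cuda` flag here are assumptions, not taken from the card:
```bash
pip install mergekit
# Save the YAML above as config.yaml, then produce the merged weights
mergekit-yaml config.yaml ./Llama-3-8B-Irene-v0.1 --cuda
```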
| {"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "facebook", "meta", "pytorch", "llama", "llama-3"], "base_model": ["ResplendentAI/SOVL_Llama3_8B", "ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B", "NeverSleep/Llama-3-Lumimaid-8B-v0.1", "cgato/L3-TheSpice-8b-v0.8.3", "Endevor/InfinityRP-v2-8B"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"} | Virt-io/Llama-3-8B-Irene-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"facebook",
"meta",
"pytorch",
"llama-3",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:ResplendentAI/SOVL_Llama3_8B",
"base_model:ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1",
"base_model:cgato/L3-TheSpice-8b-v0.8.3",
"base_model:Endevor/InfinityRP-v2-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:04:05+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | animaRegem/gemma-2b-it-lora-0_1-malayalam-tokenizer | null | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:04:11+00:00 |
text-generation | transformers | {} | sajaw/Verifiedllama-2-7b-Generated10K_3epochs | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:04:41+00:00 |
|
null | diffusers | {} | xkronosx/train_prior_ddpm-celeb-128 | null | [
"diffusers",
"safetensors",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-03T16:05:08+00:00 |
|
text-generation | transformers | # Gille/StrangeMerges_16-7B-slerp AWQ
- Model creator: [Gille](https://huggingface.co/Gille)
- Original model: [StrangeMerges_16-7B-slerp](https://huggingface.co/Gille/StrangeMerges_16-7B-slerp)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/StrangeMerges_16-7B-slerp-AWQ"
system_message = "You are StrangeMerges_16-7B-slerp, incarnated as a powerful AI. You were created by Gille."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It provides faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only; macOS users should use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
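As an alternative to the AutoAWQ snippet above, a minimal plain-Transformers sketch (assuming transformers >= 4.35.0, autoawq, and accelerate are installed; not part of the original card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/StrangeMerges_16-7B-slerp-AWQ"
# Transformers >= 4.35 loads AWQ checkpoints directly when autoawq is installed
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Briefly explain 4-bit weight quantization.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```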
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/StrangeMerges_16-7B-slerp-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:05:18+00:00 |
null | null | {} | largenumber/Test3 | null | [
"region:us"
] | null | 2024-05-03T16:05:58+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_fine_tuned_copa
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6918
- Accuracy: 0.46
- F1: 0.4570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7088 | 1.0 | 50 | 0.6921 | 0.48 | 0.48 |
| 0.7024 | 2.0 | 100 | 0.6922 | 0.49 | 0.4894 |
| 0.6993 | 3.0 | 150 | 0.6921 | 0.46 | 0.4587 |
| 0.7005 | 4.0 | 200 | 0.6920 | 0.48 | 0.4788 |
| 0.6989 | 5.0 | 250 | 0.6919 | 0.47 | 0.4679 |
| 0.7018 | 6.0 | 300 | 0.6919 | 0.46 | 0.4570 |
| 0.6943 | 7.0 | 350 | 0.6919 | 0.46 | 0.4570 |
| 0.6943 | 8.0 | 400 | 0.6918 | 0.46 | 0.4570 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "lora_fine_tuned_copa", "results": []}]} | lenatr99/lora_fine_tuned_copa | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T16:08:08+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3247
- Rouge1: 0.1978
- Rouge2: 0.099
- Rougel: 0.1684
- Rougelsum: 0.1682
- Gen Len: 19.0
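A hedged usage sketch for this checkpoint (the repo id comes from the card metadata; the sample text and generation lengths are placeholders):
```python
from transformers import pipeline

# Summarization pipeline around the fine-tuned T5 checkpoint
summarizer = pipeline("summarization", model="ngthanhlong089/my_awesome_billsum_model")

text = "The bill establishes a new grant program for state transportation agencies ..."
print(summarizer(text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```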
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.3922 | 0.1587 | 0.0648 | 0.1323 | 0.1323 | 19.0 |
| No log | 2.0 | 124 | 2.3515 | 0.1894 | 0.0903 | 0.1611 | 0.1607 | 19.0 |
| No log | 3.0 | 186 | 2.3310 | 0.1968 | 0.0983 | 0.1679 | 0.1675 | 19.0 |
| No log | 4.0 | 248 | 2.3247 | 0.1978 | 0.099 | 0.1684 | 0.1682 | 19.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_billsum_model", "results": []}]} | ngthanhlong089/my_awesome_billsum_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:08:12+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/art9e97 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:08:35+00:00 |
null | null | {} | ntmerl/jojoDiffusion | null | [
"region:us"
] | null | 2024-05-03T16:09:42+00:00 |
|
null | null | {} | descansodj/Phi-3-mini-128k-instruct-dpo | null | [
"region:us"
] | null | 2024-05-03T16:10:14+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/mkon0fy | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:11:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_mrqa_v2
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3580
## Model description
More information needed
## Intended uses & limitations
More information needed
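A minimal extractive question-answering sketch is shown below; the question and context are illustrative only and say nothing about the data the model was trained on.

```python
from transformers import pipeline

# Extractive QA with this checkpoint.
qa = pipeline("question-answering", model="enriquesaou/roberta_mrqa_v2")

answer = qa(
    question="Who released RoBERTa?",
    context="RoBERTa is a robustly optimized BERT pretraining approach released by Facebook AI in 2019.",
)
print(answer["answer"], answer["score"])
```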
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1229 | 1.0 | 967 | 1.3499 |
| 1.1979 | 2.0 | 1934 | 1.3192 |
| 0.9852 | 3.0 | 2901 | 1.3580 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "FacebookAI/roberta-base", "model-index": [{"name": "roberta_mrqa_v2", "results": []}]} | enriquesaou/roberta_mrqa_v2 | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:11:49+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jeongmi/solar_text | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:12:37+00:00 |
null | null | {} | vup2p/model_sn25_45 | null | [
"region:us"
] | null | 2024-05-03T16:12:48+00:00 |
|
text2text-generation | transformers | {} | mika5883/pretrain_rugec | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:12:51+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3391
- F1 Score: 0.8676
- Accuracy: 0.8677
## Model description
More information needed
## Intended uses & limitations
More information needed
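As a starting point, the sketch below shows one possible way to attach this LoRA adapter to its seqsight base with PEFT. It rests on two assumptions that are not documented here: that the base model works with `AutoModelForSequenceClassification` (with `trust_remote_code=True`), and that inputs are raw DNA strings; verify both against the base repository before relying on it.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L1_f"
peft_config = PeftConfig.from_pretrained(adapter_id)

# Assumption: the seqsight base exposes a sequence-classification head via remote code.
base_model = AutoModelForSequenceClassification.from_pretrained(
    peft_config.base_model_name_or_path, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base_model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path,
                                          trust_remote_code=True)

dna = "ACGTAGCTAGCTTACGGATC"  # toy sequence, not taken from the GUE_EMP_H3 dataset
inputs = tokenizer(dna, return_tensors="pt")
print(model(**inputs).logits.softmax(-1))
```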
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4898 | 2.13 | 200 | 0.4183 | 0.8196 | 0.8196 |
| 0.3827 | 4.26 | 400 | 0.4199 | 0.8255 | 0.8263 |
| 0.3609 | 6.38 | 600 | 0.4122 | 0.8241 | 0.8250 |
| 0.3487 | 8.51 | 800 | 0.3864 | 0.8301 | 0.8303 |
| 0.3376 | 10.64 | 1000 | 0.3927 | 0.8282 | 0.8290 |
| 0.3246 | 12.77 | 1200 | 0.3862 | 0.8331 | 0.8337 |
| 0.3197 | 14.89 | 1400 | 0.3668 | 0.8402 | 0.8403 |
| 0.3126 | 17.02 | 1600 | 0.3676 | 0.8407 | 0.8410 |
| 0.3023 | 19.15 | 1800 | 0.4137 | 0.8265 | 0.8277 |
| 0.299 | 21.28 | 2000 | 0.3796 | 0.8378 | 0.8383 |
| 0.2968 | 23.4 | 2200 | 0.3519 | 0.8523 | 0.8524 |
| 0.2882 | 25.53 | 2400 | 0.3784 | 0.8386 | 0.8390 |
| 0.2879 | 27.66 | 2600 | 0.3634 | 0.8487 | 0.8490 |
| 0.2888 | 29.79 | 2800 | 0.3759 | 0.8458 | 0.8464 |
| 0.2845 | 31.91 | 3000 | 0.3722 | 0.8452 | 0.8457 |
| 0.2801 | 34.04 | 3200 | 0.3733 | 0.8446 | 0.8450 |
| 0.2778 | 36.17 | 3400 | 0.3864 | 0.8438 | 0.8444 |
| 0.2752 | 38.3 | 3600 | 0.3656 | 0.8568 | 0.8570 |
| 0.2773 | 40.43 | 3800 | 0.3683 | 0.8527 | 0.8530 |
| 0.2727 | 42.55 | 4000 | 0.3568 | 0.8576 | 0.8577 |
| 0.2711 | 44.68 | 4200 | 0.3499 | 0.8596 | 0.8597 |
| 0.2708 | 46.81 | 4400 | 0.3751 | 0.8479 | 0.8484 |
| 0.2718 | 48.94 | 4600 | 0.3834 | 0.8463 | 0.8470 |
| 0.2679 | 51.06 | 4800 | 0.3602 | 0.8568 | 0.8570 |
| 0.2659 | 53.19 | 5000 | 0.3933 | 0.8455 | 0.8464 |
| 0.2692 | 55.32 | 5200 | 0.3556 | 0.8555 | 0.8557 |
| 0.2665 | 57.45 | 5400 | 0.3706 | 0.8508 | 0.8510 |
| 0.2626 | 59.57 | 5600 | 0.3638 | 0.8568 | 0.8570 |
| 0.2672 | 61.7 | 5800 | 0.3498 | 0.8589 | 0.8591 |
| 0.2595 | 63.83 | 6000 | 0.3725 | 0.8546 | 0.8550 |
| 0.2637 | 65.96 | 6200 | 0.3743 | 0.8580 | 0.8584 |
| 0.2606 | 68.09 | 6400 | 0.3801 | 0.8559 | 0.8564 |
| 0.2586 | 70.21 | 6600 | 0.3770 | 0.8546 | 0.8550 |
| 0.2613 | 72.34 | 6800 | 0.3648 | 0.8547 | 0.8550 |
| 0.2608 | 74.47 | 7000 | 0.3994 | 0.8434 | 0.8444 |
| 0.2586 | 76.6 | 7200 | 0.3739 | 0.8547 | 0.8550 |
| 0.2612 | 78.72 | 7400 | 0.3657 | 0.8567 | 0.8570 |
| 0.2575 | 80.85 | 7600 | 0.3624 | 0.8554 | 0.8557 |
| 0.2573 | 82.98 | 7800 | 0.3695 | 0.8573 | 0.8577 |
| 0.2579 | 85.11 | 8000 | 0.3598 | 0.8574 | 0.8577 |
| 0.2579 | 87.23 | 8200 | 0.3569 | 0.8595 | 0.8597 |
| 0.253 | 89.36 | 8400 | 0.3685 | 0.8534 | 0.8537 |
| 0.2559 | 91.49 | 8600 | 0.3750 | 0.8547 | 0.8550 |
| 0.2566 | 93.62 | 8800 | 0.3699 | 0.8553 | 0.8557 |
| 0.2546 | 95.74 | 9000 | 0.3729 | 0.8560 | 0.8564 |
| 0.2524 | 97.87 | 9200 | 0.3649 | 0.8567 | 0.8570 |
| 0.2533 | 100.0 | 9400 | 0.3632 | 0.8568 | 0.8570 |
| 0.2557 | 102.13 | 9600 | 0.3663 | 0.8573 | 0.8577 |
| 0.2527 | 104.26 | 9800 | 0.3654 | 0.8567 | 0.8570 |
| 0.2531 | 106.38 | 10000 | 0.3667 | 0.8573 | 0.8577 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:12:53+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3253
- F1 Score: 0.8696
- Accuracy: 0.8697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.455 | 2.13 | 200 | 0.3926 | 0.8357 | 0.8357 |
| 0.3495 | 4.26 | 400 | 0.4119 | 0.8128 | 0.8143 |
| 0.3143 | 6.38 | 600 | 0.4219 | 0.8202 | 0.8216 |
| 0.2963 | 8.51 | 800 | 0.3801 | 0.8343 | 0.8350 |
| 0.2842 | 10.64 | 1000 | 0.3831 | 0.8389 | 0.8397 |
| 0.2751 | 12.77 | 1200 | 0.3779 | 0.8493 | 0.8497 |
| 0.2711 | 14.89 | 1400 | 0.3792 | 0.8452 | 0.8457 |
| 0.2664 | 17.02 | 1600 | 0.3620 | 0.8549 | 0.8550 |
| 0.2571 | 19.15 | 1800 | 0.4008 | 0.8409 | 0.8417 |
| 0.2527 | 21.28 | 2000 | 0.3837 | 0.8473 | 0.8477 |
| 0.2544 | 23.4 | 2200 | 0.3505 | 0.8595 | 0.8597 |
| 0.2405 | 25.53 | 2400 | 0.4114 | 0.8446 | 0.8450 |
| 0.247 | 27.66 | 2600 | 0.3592 | 0.8628 | 0.8631 |
| 0.2449 | 29.79 | 2800 | 0.3576 | 0.8554 | 0.8557 |
| 0.2392 | 31.91 | 3000 | 0.3568 | 0.8624 | 0.8624 |
| 0.2344 | 34.04 | 3200 | 0.3470 | 0.8670 | 0.8671 |
| 0.2327 | 36.17 | 3400 | 0.3992 | 0.8511 | 0.8517 |
| 0.2284 | 38.3 | 3600 | 0.3705 | 0.8622 | 0.8624 |
| 0.2305 | 40.43 | 3800 | 0.3835 | 0.8572 | 0.8577 |
| 0.2239 | 42.55 | 4000 | 0.3739 | 0.8556 | 0.8557 |
| 0.2211 | 44.68 | 4200 | 0.3594 | 0.8651 | 0.8651 |
| 0.2194 | 46.81 | 4400 | 0.3890 | 0.8541 | 0.8544 |
| 0.2202 | 48.94 | 4600 | 0.3940 | 0.8511 | 0.8517 |
| 0.215 | 51.06 | 4800 | 0.3617 | 0.8610 | 0.8611 |
| 0.2124 | 53.19 | 5000 | 0.4273 | 0.8477 | 0.8484 |
| 0.2171 | 55.32 | 5200 | 0.3822 | 0.8595 | 0.8597 |
| 0.2099 | 57.45 | 5400 | 0.3963 | 0.8508 | 0.8510 |
| 0.2073 | 59.57 | 5600 | 0.3901 | 0.8549 | 0.8550 |
| 0.2075 | 61.7 | 5800 | 0.3635 | 0.8651 | 0.8651 |
| 0.2038 | 63.83 | 6000 | 0.4093 | 0.8520 | 0.8524 |
| 0.2061 | 65.96 | 6200 | 0.4333 | 0.8450 | 0.8457 |
| 0.2036 | 68.09 | 6400 | 0.4143 | 0.8540 | 0.8544 |
| 0.2006 | 70.21 | 6600 | 0.4012 | 0.8589 | 0.8591 |
| 0.2009 | 72.34 | 6800 | 0.3996 | 0.8588 | 0.8591 |
| 0.1995 | 74.47 | 7000 | 0.4453 | 0.8435 | 0.8444 |
| 0.1969 | 76.6 | 7200 | 0.3989 | 0.8541 | 0.8544 |
| 0.2014 | 78.72 | 7400 | 0.3923 | 0.8543 | 0.8544 |
| 0.199 | 80.85 | 7600 | 0.4155 | 0.8495 | 0.8497 |
| 0.1962 | 82.98 | 7800 | 0.4144 | 0.8549 | 0.8550 |
| 0.1938 | 85.11 | 8000 | 0.3919 | 0.8589 | 0.8591 |
| 0.1919 | 87.23 | 8200 | 0.4019 | 0.8523 | 0.8524 |
| 0.1905 | 89.36 | 8400 | 0.4164 | 0.8530 | 0.8530 |
| 0.1898 | 91.49 | 8600 | 0.4227 | 0.8541 | 0.8544 |
| 0.1908 | 93.62 | 8800 | 0.4182 | 0.8548 | 0.8550 |
| 0.1906 | 95.74 | 9000 | 0.4258 | 0.8514 | 0.8517 |
| 0.188 | 97.87 | 9200 | 0.4151 | 0.8555 | 0.8557 |
| 0.1886 | 100.0 | 9400 | 0.4124 | 0.8543 | 0.8544 |
| 0.1891 | 102.13 | 9600 | 0.4163 | 0.8522 | 0.8524 |
| 0.1867 | 104.26 | 9800 | 0.4136 | 0.8536 | 0.8537 |
| 0.1866 | 106.38 | 10000 | 0.4139 | 0.8542 | 0.8544 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:12:54+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3472
- F1 Score: 0.8683
- Accuracy: 0.8684
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4364 | 2.13 | 200 | 0.3952 | 0.8232 | 0.8236 |
| 0.3132 | 4.26 | 400 | 0.3627 | 0.8461 | 0.8464 |
| 0.2868 | 6.38 | 600 | 0.4096 | 0.8341 | 0.8350 |
| 0.275 | 8.51 | 800 | 0.3467 | 0.8561 | 0.8564 |
| 0.2648 | 10.64 | 1000 | 0.3499 | 0.8561 | 0.8564 |
| 0.2538 | 12.77 | 1200 | 0.3487 | 0.8589 | 0.8591 |
| 0.2482 | 14.89 | 1400 | 0.4163 | 0.8368 | 0.8377 |
| 0.2415 | 17.02 | 1600 | 0.3507 | 0.8671 | 0.8671 |
| 0.2295 | 19.15 | 1800 | 0.3938 | 0.8472 | 0.8477 |
| 0.2222 | 21.28 | 2000 | 0.3751 | 0.8637 | 0.8637 |
| 0.2213 | 23.4 | 2200 | 0.3761 | 0.8635 | 0.8637 |
| 0.2024 | 25.53 | 2400 | 0.4104 | 0.8596 | 0.8597 |
| 0.2089 | 27.66 | 2600 | 0.4076 | 0.8607 | 0.8611 |
| 0.198 | 29.79 | 2800 | 0.4025 | 0.8620 | 0.8624 |
| 0.1919 | 31.91 | 3000 | 0.4010 | 0.8570 | 0.8570 |
| 0.1828 | 34.04 | 3200 | 0.3662 | 0.8677 | 0.8677 |
| 0.1758 | 36.17 | 3400 | 0.4293 | 0.8629 | 0.8631 |
| 0.1685 | 38.3 | 3600 | 0.4412 | 0.8574 | 0.8577 |
| 0.1667 | 40.43 | 3800 | 0.4562 | 0.8585 | 0.8591 |
| 0.1578 | 42.55 | 4000 | 0.4707 | 0.8549 | 0.8550 |
| 0.1513 | 44.68 | 4200 | 0.4599 | 0.8575 | 0.8577 |
| 0.1463 | 46.81 | 4400 | 0.4876 | 0.8582 | 0.8584 |
| 0.1431 | 48.94 | 4600 | 0.5163 | 0.8484 | 0.8490 |
| 0.1349 | 51.06 | 4800 | 0.4653 | 0.8635 | 0.8637 |
| 0.1296 | 53.19 | 5000 | 0.5254 | 0.8593 | 0.8597 |
| 0.1278 | 55.32 | 5200 | 0.5322 | 0.8526 | 0.8530 |
| 0.1199 | 57.45 | 5400 | 0.5515 | 0.8533 | 0.8537 |
| 0.117 | 59.57 | 5600 | 0.5362 | 0.8581 | 0.8584 |
| 0.1133 | 61.7 | 5800 | 0.4982 | 0.8569 | 0.8570 |
| 0.1118 | 63.83 | 6000 | 0.5740 | 0.8453 | 0.8457 |
| 0.11 | 65.96 | 6200 | 0.6111 | 0.8408 | 0.8417 |
| 0.1016 | 68.09 | 6400 | 0.6034 | 0.8464 | 0.8470 |
| 0.1013 | 70.21 | 6600 | 0.5935 | 0.8506 | 0.8510 |
| 0.0944 | 72.34 | 6800 | 0.5933 | 0.8560 | 0.8564 |
| 0.0944 | 74.47 | 7000 | 0.6320 | 0.8443 | 0.8450 |
| 0.09 | 76.6 | 7200 | 0.6099 | 0.8539 | 0.8544 |
| 0.0905 | 78.72 | 7400 | 0.6381 | 0.8525 | 0.8530 |
| 0.093 | 80.85 | 7600 | 0.6642 | 0.8466 | 0.8470 |
| 0.0888 | 82.98 | 7800 | 0.6228 | 0.8553 | 0.8557 |
| 0.0854 | 85.11 | 8000 | 0.6298 | 0.8526 | 0.8530 |
| 0.0839 | 87.23 | 8200 | 0.6514 | 0.8498 | 0.8504 |
| 0.0789 | 89.36 | 8400 | 0.6437 | 0.8515 | 0.8517 |
| 0.0796 | 91.49 | 8600 | 0.6850 | 0.8465 | 0.8470 |
| 0.0812 | 93.62 | 8800 | 0.6603 | 0.8466 | 0.8470 |
| 0.0816 | 95.74 | 9000 | 0.6931 | 0.8450 | 0.8457 |
| 0.0772 | 97.87 | 9200 | 0.6764 | 0.8478 | 0.8484 |
| 0.075 | 100.0 | 9400 | 0.6582 | 0.8500 | 0.8504 |
| 0.0772 | 102.13 | 9600 | 0.6674 | 0.8493 | 0.8497 |
| 0.0715 | 104.26 | 9800 | 0.6791 | 0.8452 | 0.8457 |
| 0.0758 | 106.38 | 10000 | 0.6793 | 0.8465 | 0.8470 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:12:56+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5702
- F1 Score: 0.7087
- Accuracy: 0.7085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6417 | 0.93 | 200 | 0.5973 | 0.6815 | 0.6812 |
| 0.6039 | 1.87 | 400 | 0.6004 | 0.6859 | 0.6871 |
| 0.5916 | 2.8 | 600 | 0.5816 | 0.7040 | 0.7038 |
| 0.5863 | 3.74 | 800 | 0.5817 | 0.7044 | 0.7041 |
| 0.5795 | 4.67 | 1000 | 0.5882 | 0.7058 | 0.7062 |
| 0.5751 | 5.61 | 1200 | 0.5956 | 0.7051 | 0.7067 |
| 0.5732 | 6.54 | 1400 | 0.5767 | 0.7128 | 0.7126 |
| 0.5653 | 7.48 | 1600 | 0.5786 | 0.7123 | 0.7120 |
| 0.5723 | 8.41 | 1800 | 0.5774 | 0.7119 | 0.7117 |
| 0.5682 | 9.35 | 2000 | 0.5854 | 0.7109 | 0.7117 |
| 0.5614 | 10.28 | 2200 | 0.5768 | 0.7124 | 0.7123 |
| 0.5653 | 11.21 | 2400 | 0.5738 | 0.7158 | 0.7158 |
| 0.5605 | 12.15 | 2600 | 0.5763 | 0.7136 | 0.7138 |
| 0.559 | 13.08 | 2800 | 0.5887 | 0.7114 | 0.7126 |
| 0.5598 | 14.02 | 3000 | 0.5760 | 0.7146 | 0.7150 |
| 0.5565 | 14.95 | 3200 | 0.5703 | 0.7176 | 0.7176 |
| 0.5541 | 15.89 | 3400 | 0.5891 | 0.7101 | 0.7120 |
| 0.552 | 16.82 | 3600 | 0.5692 | 0.7192 | 0.7191 |
| 0.5579 | 17.76 | 3800 | 0.5672 | 0.7212 | 0.7211 |
| 0.5528 | 18.69 | 4000 | 0.5698 | 0.7187 | 0.7188 |
| 0.5492 | 19.63 | 4200 | 0.5783 | 0.7161 | 0.7170 |
| 0.5525 | 20.56 | 4400 | 0.5653 | 0.7226 | 0.7226 |
| 0.5496 | 21.5 | 4600 | 0.5951 | 0.7070 | 0.7103 |
| 0.5495 | 22.43 | 4800 | 0.5678 | 0.7221 | 0.7223 |
| 0.5521 | 23.36 | 5000 | 0.5792 | 0.7182 | 0.7196 |
| 0.5458 | 24.3 | 5200 | 0.5668 | 0.7237 | 0.7238 |
| 0.5497 | 25.23 | 5400 | 0.5603 | 0.7257 | 0.7255 |
| 0.5482 | 26.17 | 5600 | 0.5680 | 0.7232 | 0.7235 |
| 0.5479 | 27.1 | 5800 | 0.5718 | 0.7214 | 0.7223 |
| 0.5439 | 28.04 | 6000 | 0.5623 | 0.7295 | 0.7293 |
| 0.5477 | 28.97 | 6200 | 0.5758 | 0.7186 | 0.7196 |
| 0.5463 | 29.91 | 6400 | 0.5683 | 0.7237 | 0.7240 |
| 0.5461 | 30.84 | 6600 | 0.5867 | 0.7164 | 0.7185 |
| 0.5448 | 31.78 | 6800 | 0.5662 | 0.7250 | 0.7252 |
| 0.5426 | 32.71 | 7000 | 0.5676 | 0.7240 | 0.7243 |
| 0.5419 | 33.64 | 7200 | 0.5682 | 0.7239 | 0.7246 |
| 0.5439 | 34.58 | 7400 | 0.5696 | 0.7216 | 0.7223 |
| 0.5425 | 35.51 | 7600 | 0.5626 | 0.7284 | 0.7284 |
| 0.5385 | 36.45 | 7800 | 0.5638 | 0.7287 | 0.7287 |
| 0.5443 | 37.38 | 8000 | 0.5762 | 0.7198 | 0.7211 |
| 0.5399 | 38.32 | 8200 | 0.5670 | 0.7270 | 0.7276 |
| 0.5409 | 39.25 | 8400 | 0.5653 | 0.7284 | 0.7287 |
| 0.5439 | 40.19 | 8600 | 0.5633 | 0.7277 | 0.7279 |
| 0.5406 | 41.12 | 8800 | 0.5669 | 0.7262 | 0.7267 |
| 0.5393 | 42.06 | 9000 | 0.5684 | 0.7268 | 0.7273 |
| 0.543 | 42.99 | 9200 | 0.5738 | 0.7209 | 0.7220 |
| 0.5384 | 43.93 | 9400 | 0.5725 | 0.7238 | 0.7246 |
| 0.5406 | 44.86 | 9600 | 0.5664 | 0.7266 | 0.7270 |
| 0.542 | 45.79 | 9800 | 0.5679 | 0.7259 | 0.7264 |
| 0.5386 | 46.73 | 10000 | 0.5694 | 0.7248 | 0.7255 |
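The F1 Score and Accuracy columns above are standard classification metrics; a small sketch of how such values are computed is shown below. The labels are toy data, and macro-averaged F1 is an assumption since the card does not state which averaging was used.

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy labels only, to illustrate how the table's columns are computed (not real model outputs).
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]
print("accuracy:", accuracy_score(y_true, y_pred))
print("f1 (macro):", f1_score(y_true, y_pred, average="macro"))
```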
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:13:19+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Noise Ko - Dearlie
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Noise Data dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0157
- Cer: 41.4126
## Model description
More information needed
## Intended uses & limitations
More information needed
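Pending proper documentation, the sketch below shows one way to run the checkpoint through the ASR pipeline. The audio path is a placeholder, and forcing Korean transcription is an assumption based on the training language.

```python
from transformers import pipeline

# Transcription sketch; "sample.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="Dearlie/whisper-noise3",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(asr("sample.wav")["text"])
```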
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.3649 | 0.8780 | 1000 | 1.3839 | 54.5000 |
| 1.0173 | 1.7559 | 2000 | 1.1473 | 52.0300 |
| 0.7373 | 2.6339 | 3000 | 1.0454 | 43.7234 |
| 0.5197 | 3.5119 | 4000 | 1.0157 | 41.4126 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["ko"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["AIHub/noise"], "base_model": "openai/whisper-base", "model-index": [{"name": "Whisper Base Noise Ko - Dearlie", "results": []}]} | Dearlie/whisper-noise3 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ko",
"dataset:AIHub/noise",
"base_model:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:13:31+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioGPT_DocBot_SonatafyAI_V1
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
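Intended usage is not documented, so the snippet below is only a generic text-generation sketch; the question-style prompt and sampling settings are assumptions, not a prescribed prompt format.

```python
from transformers import pipeline

# Generation sketch with the fine-tuned BioGPT checkpoint.
generator = pipeline("text-generation", model="Sonatafyai/BioGPT_DocBot_SonatafyAI_V1")

prompt = "Question: What are common symptoms of seasonal influenza? Answer:"
print(generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)[0]["generated_text"])
```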
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1291 | 1.0 | 1109 | 2.9973 |
| 2.8675 | 2.0 | 2218 | 2.9057 |
| 2.7264 | 3.0 | 3327 | 2.8822 |
| 2.6095 | 4.0 | 4436 | 2.8706 |
| 2.548 | 5.0 | 5545 | 2.8762 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/biogpt", "model-index": [{"name": "BioGPT_DocBot_SonatafyAI_V1", "results": []}]} | Sonatafyai/BioGPT_DocBot_SonatafyAI_V1 | null | [
"transformers",
"tensorboard",
"safetensors",
"biogpt",
"text-generation",
"generated_from_trainer",
"base_model:microsoft/biogpt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:13:46+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5638
- F1 Score: 0.7186
- Accuracy: 0.7185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6275 | 0.93 | 200 | 0.5856 | 0.6927 | 0.6924 |
| 0.589 | 1.87 | 400 | 0.5897 | 0.7023 | 0.7032 |
| 0.5744 | 2.8 | 600 | 0.5709 | 0.7072 | 0.7070 |
| 0.5693 | 3.74 | 800 | 0.5721 | 0.7129 | 0.7126 |
| 0.5626 | 4.67 | 1000 | 0.5702 | 0.7176 | 0.7173 |
| 0.5563 | 5.61 | 1200 | 0.5767 | 0.7101 | 0.7109 |
| 0.5549 | 6.54 | 1400 | 0.5737 | 0.7128 | 0.7135 |
| 0.5438 | 7.48 | 1600 | 0.5655 | 0.7219 | 0.7217 |
| 0.5507 | 8.41 | 1800 | 0.5624 | 0.7231 | 0.7229 |
| 0.5442 | 9.35 | 2000 | 0.5599 | 0.7237 | 0.7235 |
| 0.5376 | 10.28 | 2200 | 0.5564 | 0.7277 | 0.7276 |
| 0.5401 | 11.21 | 2400 | 0.5580 | 0.7302 | 0.7299 |
| 0.534 | 12.15 | 2600 | 0.5662 | 0.7264 | 0.7267 |
| 0.5312 | 13.08 | 2800 | 0.5639 | 0.7289 | 0.7287 |
| 0.5321 | 14.02 | 3000 | 0.5519 | 0.7270 | 0.7267 |
| 0.5281 | 14.95 | 3200 | 0.5523 | 0.7317 | 0.7314 |
| 0.5244 | 15.89 | 3400 | 0.5527 | 0.7343 | 0.7340 |
| 0.5211 | 16.82 | 3600 | 0.5606 | 0.7300 | 0.7299 |
| 0.5255 | 17.76 | 3800 | 0.5725 | 0.7209 | 0.7220 |
| 0.522 | 18.69 | 4000 | 0.5527 | 0.7327 | 0.7326 |
| 0.5155 | 19.63 | 4200 | 0.5498 | 0.7376 | 0.7372 |
| 0.5197 | 20.56 | 4400 | 0.5515 | 0.7346 | 0.7343 |
| 0.5178 | 21.5 | 4600 | 0.5649 | 0.7226 | 0.7235 |
| 0.5147 | 22.43 | 4800 | 0.5514 | 0.7370 | 0.7367 |
| 0.5162 | 23.36 | 5000 | 0.5602 | 0.7330 | 0.7331 |
| 0.5122 | 24.3 | 5200 | 0.5509 | 0.7375 | 0.7372 |
| 0.5132 | 25.23 | 5400 | 0.5460 | 0.7381 | 0.7378 |
| 0.5129 | 26.17 | 5600 | 0.5504 | 0.7370 | 0.7367 |
| 0.5098 | 27.1 | 5800 | 0.5520 | 0.7352 | 0.7349 |
| 0.5072 | 28.04 | 6000 | 0.5511 | 0.7390 | 0.7387 |
| 0.5102 | 28.97 | 6200 | 0.5572 | 0.7351 | 0.7349 |
| 0.5079 | 29.91 | 6400 | 0.5584 | 0.7315 | 0.7314 |
| 0.5078 | 30.84 | 6600 | 0.5681 | 0.7282 | 0.7287 |
| 0.5028 | 31.78 | 6800 | 0.5553 | 0.7366 | 0.7364 |
| 0.5059 | 32.71 | 7000 | 0.5563 | 0.7352 | 0.7349 |
| 0.5042 | 33.64 | 7200 | 0.5569 | 0.7308 | 0.7311 |
| 0.5027 | 34.58 | 7400 | 0.5531 | 0.7366 | 0.7364 |
| 0.5031 | 35.51 | 7600 | 0.5531 | 0.7364 | 0.7361 |
| 0.499 | 36.45 | 7800 | 0.5564 | 0.7363 | 0.7361 |
| 0.5026 | 37.38 | 8000 | 0.5586 | 0.7339 | 0.7340 |
| 0.5005 | 38.32 | 8200 | 0.5524 | 0.7357 | 0.7355 |
| 0.4999 | 39.25 | 8400 | 0.5557 | 0.7345 | 0.7343 |
| 0.5014 | 40.19 | 8600 | 0.5566 | 0.7355 | 0.7352 |
| 0.4998 | 41.12 | 8800 | 0.5579 | 0.7347 | 0.7346 |
| 0.4986 | 42.06 | 9000 | 0.5580 | 0.7363 | 0.7361 |
| 0.5021 | 42.99 | 9200 | 0.5613 | 0.7319 | 0.7320 |
| 0.4965 | 43.93 | 9400 | 0.5597 | 0.7347 | 0.7346 |
| 0.4979 | 44.86 | 9600 | 0.5571 | 0.7357 | 0.7355 |
| 0.5001 | 45.79 | 9800 | 0.5572 | 0.7336 | 0.7334 |
| 0.4954 | 46.73 | 10000 | 0.5587 | 0.7353 | 0.7352 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:14:32+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H4ac-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H4ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H4ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5976
- F1 Score: 0.7230
- Accuracy: 0.7229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6163 | 0.93 | 200 | 0.5941 | 0.6959 | 0.6965 |
| 0.5793 | 1.87 | 400 | 0.5846 | 0.7054 | 0.7062 |
| 0.5645 | 2.8 | 600 | 0.5619 | 0.7224 | 0.7223 |
| 0.5566 | 3.74 | 800 | 0.5643 | 0.7247 | 0.7243 |
| 0.5498 | 4.67 | 1000 | 0.5582 | 0.7258 | 0.7255 |
| 0.5412 | 5.61 | 1200 | 0.5765 | 0.7119 | 0.7132 |
| 0.5392 | 6.54 | 1400 | 0.5703 | 0.7175 | 0.7182 |
| 0.5263 | 7.48 | 1600 | 0.5556 | 0.7307 | 0.7305 |
| 0.5297 | 8.41 | 1800 | 0.5542 | 0.7281 | 0.7279 |
| 0.522 | 9.35 | 2000 | 0.5545 | 0.7337 | 0.7334 |
| 0.5143 | 10.28 | 2200 | 0.5502 | 0.7321 | 0.7320 |
| 0.5141 | 11.21 | 2400 | 0.5602 | 0.7309 | 0.7308 |
| 0.5068 | 12.15 | 2600 | 0.5658 | 0.7300 | 0.7302 |
| 0.5029 | 13.08 | 2800 | 0.5543 | 0.7323 | 0.7320 |
| 0.5013 | 14.02 | 3000 | 0.5646 | 0.7318 | 0.7317 |
| 0.4941 | 14.95 | 3200 | 0.5613 | 0.7361 | 0.7358 |
| 0.4883 | 15.89 | 3400 | 0.5586 | 0.7370 | 0.7367 |
| 0.4829 | 16.82 | 3600 | 0.5657 | 0.7337 | 0.7337 |
| 0.4853 | 17.76 | 3800 | 0.5990 | 0.7200 | 0.7220 |
| 0.479 | 18.69 | 4000 | 0.5616 | 0.7352 | 0.7349 |
| 0.4725 | 19.63 | 4200 | 0.5568 | 0.7369 | 0.7367 |
| 0.475 | 20.56 | 4400 | 0.5594 | 0.7378 | 0.7375 |
| 0.4682 | 21.5 | 4600 | 0.5767 | 0.7313 | 0.7317 |
| 0.4652 | 22.43 | 4800 | 0.5581 | 0.7384 | 0.7381 |
| 0.4614 | 23.36 | 5000 | 0.5728 | 0.7331 | 0.7331 |
| 0.4579 | 24.3 | 5200 | 0.5709 | 0.7407 | 0.7405 |
| 0.4564 | 25.23 | 5400 | 0.5619 | 0.7370 | 0.7367 |
| 0.4548 | 26.17 | 5600 | 0.5749 | 0.7372 | 0.7370 |
| 0.4519 | 27.1 | 5800 | 0.5706 | 0.7326 | 0.7323 |
| 0.4479 | 28.04 | 6000 | 0.5742 | 0.7396 | 0.7393 |
| 0.446 | 28.97 | 6200 | 0.5767 | 0.7378 | 0.7375 |
| 0.4445 | 29.91 | 6400 | 0.5753 | 0.7379 | 0.7378 |
| 0.4399 | 30.84 | 6600 | 0.5980 | 0.7372 | 0.7372 |
| 0.4351 | 31.78 | 6800 | 0.5851 | 0.7386 | 0.7384 |
| 0.4336 | 32.71 | 7000 | 0.5883 | 0.7352 | 0.7349 |
| 0.4319 | 33.64 | 7200 | 0.5929 | 0.7337 | 0.7340 |
| 0.4338 | 34.58 | 7400 | 0.5883 | 0.7369 | 0.7370 |
| 0.4277 | 35.51 | 7600 | 0.5873 | 0.7371 | 0.7370 |
| 0.424 | 36.45 | 7800 | 0.5890 | 0.7401 | 0.7399 |
| 0.4254 | 37.38 | 8000 | 0.5903 | 0.7411 | 0.7411 |
| 0.4238 | 38.32 | 8200 | 0.5865 | 0.7362 | 0.7361 |
| 0.4205 | 39.25 | 8400 | 0.5941 | 0.7387 | 0.7384 |
| 0.4205 | 40.19 | 8600 | 0.5969 | 0.7378 | 0.7375 |
| 0.4193 | 41.12 | 8800 | 0.5965 | 0.7401 | 0.7399 |
| 0.4182 | 42.06 | 9000 | 0.5959 | 0.7398 | 0.7396 |
| 0.4175 | 42.99 | 9200 | 0.6042 | 0.7358 | 0.7358 |
| 0.4139 | 43.93 | 9400 | 0.6035 | 0.7378 | 0.7378 |
| 0.4173 | 44.86 | 9600 | 0.5966 | 0.7403 | 0.7402 |
| 0.4187 | 45.79 | 9800 | 0.5986 | 0.7397 | 0.7396 |
| 0.4146 | 46.73 | 10000 | 0.5997 | 0.7376 | 0.7375 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H4ac-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H4ac-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:14:32+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora_fine_tuned_boolq
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5547
- Accuracy: 0.7778
- F1: 0.6806
## Model description
More information needed
## Intended uses & limitations
More information needed
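As a hedged starting point, the sketch below loads the adapter with PEFT's `AutoPeftModelForSequenceClassification` and scores a BoolQ-style question/passage pair; the input ordering and the yes/no label mapping are assumptions to verify against the training script.

```python
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

adapter_id = "lenatr99/lora_fine_tuned_boolq"
# AutoPeftModel* pulls the base model named in the adapter config and attaches the LoRA weights.
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

question = "is the sky blue on a clear day"
passage = "On a clear day the sky appears blue because air molecules scatter blue light more than red."
inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    predicted = model(**inputs).logits.argmax(dim=-1).item()
print("yes" if predicted == 1 else "no")  # label order is an assumption
```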
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.6762 | 4.1667 | 50 | 0.5947 | 0.7778 | 0.6806 |
| 0.6639 | 8.3333 | 100 | 0.5719 | 0.7778 | 0.6806 |
| 0.6555 | 12.5 | 150 | 0.5648 | 0.7778 | 0.6806 |
| 0.6605 | 16.6667 | 200 | 0.5615 | 0.7778 | 0.6806 |
| 0.6612 | 20.8333 | 250 | 0.5568 | 0.7778 | 0.6806 |
| 0.6508 | 25.0 | 300 | 0.5567 | 0.7778 | 0.6806 |
| 0.6491 | 29.1667 | 350 | 0.5550 | 0.7778 | 0.6806 |
| 0.663 | 33.3333 | 400 | 0.5547 | 0.7778 | 0.6806 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "lora_fine_tuned_boolq", "results": []}]} | lenatr99/lora_fine_tuned_boolq | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T16:14:40+00:00 |
text-to-image | diffusers |
# Juggernaut X Hyper + RunDiffusion Official (Community Version)


This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.
Juggernaut X (SAFE) is available exclusively on [RunDiffusion.com](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=juggernautv10hyper)
Juggernaut X Hyper World Wide Release!
After almost two months, we are thrilled to announce the next version of Juggernaut is ready to launch! Introducing Juggernaut X Hyper. If you have been following us on Twitter (X), you will have seen the test images. If you aren't following us, do it now! https://x.com/RunDiffusion & Kandoo's new account needs some new followers. Help him out. https://x.com/Juggernaut_AI
- TWO Versions of Juggernaut X Hyper! SFW and NSFW
- Fully trained from the ground up using the GPT4 Vision Captioning tool by LEOSAM
- Much improved prompt adherence
- Expanded and cleaner dataset with higher quality images
- Improved classifications of shots (Full Body, Midshots, Portraits, etc.)
- Enhanced text generation capability
- Two different prompting techniques, Natural and Tagging style
- Enhanced by RunDiffusion Photo for refinement of details
Read more about this version here https://rundiffusion.com/juggernaut-xl
Dual Version Release
A Safe for Work (SFW) and a Not Safe for Work (NSFW) version of Juggernaut X Hyper will be available. This dual release strategy is designed to cater to diverse preferences and ensure inclusivity, offering the perfect solution for every user.
Our newest Safe for Work edition is available right now exclusively through Fooocus on RunDiffusion.com.
Launch Fooocus on RunDiffusion
Find Juggernaut X_RunDiffusion_Hyper.safetensors and start generating!
It allows users to generate high-quality, suitable images while adhering to safe content guidelines. This version is particularly user-friendly, requiring only simple, straightforward prompts. It's ideal for the workplace, students, educators, and families.
SAFE stands for Suitable AI For Everyone.
Conversely, the Not Safe for Work version offers unrestricted creative freedom across all categories and spectrums. This model is perfect for those seeking less constrained artistic expression and is available for free on Civitai.com, though a license is required for commercial use.
Both models of Juggernaut X Hyper (v10) represent our commitment to fostering a creative community that respects diverse needs and preferences.
Prompting Guide
Because everything has been trained from the ground up, prompting is a bit different. (Simpler, don't worry) @Kandoo has created a guide to help you seamlessly integrate this powerful model into your workflow, enabling you to leverage its advanced capabilities without feeling overwhelmed. Download it here: https://rundiffusion.com/juggernaut-xl#nav
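For diffusers users, a loading sketch is shown below. The step count and guidance scale are illustrative assumptions for a Hyper-style checkpoint, not the official settings — defer to the prompting guide linked above for those.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Loading sketch; sampler/step/CFG settings are assumptions, not official recommendations.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-X-Hyper", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "cinematic photo of a lighthouse at dusk, dramatic sky, 35mm film grain",
    num_inference_steps=8,
    guidance_scale=2.0,
).images[0]
image.save("juggernaut_x_hyper.png")
```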
As always, we love our community and feel so lucky to be in this position to bring these awesome tools and models to you amazing diffusers. Thanks for supporting us since our first day back in 2022. Going on TWO YEARS since we first started using generative AI. Time flies when you're having fun. Wow!
Don't forget to follow us on Twitter, where we have way more updates on big things we're working on. The future is bright.
https://x.com/RunDiffusion
-RunDiffusion Team
 | {"language": ["en"], "license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture"], "thumbnail": "https://storage.googleapis.com/run-diffusion-public-assets/juggernaut-x/juggernaut-x-hyper-0-256.webp", "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "pipeline_tag": "text-to-image"} | RunDiffusion/Juggernaut-X-Hyper | null | [
"diffusers",
"art",
"people",
"diffusion",
"Cinematic",
"Photography",
"Landscape",
"Interior",
"Food",
"Car",
"Wildlife",
"Architecture",
"text-to-image",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-05-03T16:14:58+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny chinese - VingeNie
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 16.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8984
- Cer Ortho: 82.5463
- Cer: 30.3139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 25
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.2618 | 0.0400 | 50 | 0.8984 | 82.5463 | 30.3139 |
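The Cer column above is a character error rate; a small sketch of computing the same kind of score with the `evaluate` library is shown below (the strings are made up, not taken from Common Voice).

```python
import evaluate  # the CER metric needs the `jiwer` backend installed

# Toy prediction/reference pair to illustrate how a character error rate is computed.
cer_metric = evaluate.load("cer")
predictions = ["今天天气很好"]
references = ["今天天氣很好"]
print(100 * cer_metric.compute(predictions=predictions, references=references))
```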
### Framework versions
- Transformers 4.40.1
- Pytorch 2.0.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_1"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Tiny chinese - VingeNie", "results": []}]} | VingeNie/whisper-tiny-zh_CN_lr4_3600 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_16_1",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:15:03+00:00 |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-2-ko-7b - GGUF
- Model creator: https://huggingface.co/beomi/
- Original model: https://huggingface.co/beomi/llama-2-ko-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-2-ko-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q2_K.gguf) | Q2_K | 2.42GB |
| [llama-2-ko-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ3_XS.gguf) | IQ3_XS | 2.67GB |
| [llama-2-ko-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ3_S.gguf) | IQ3_S | 2.81GB |
| [llama-2-ko-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q3_K_S.gguf) | Q3_K_S | 2.81GB |
| [llama-2-ko-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ3_M.gguf) | IQ3_M | 2.97GB |
| [llama-2-ko-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q3_K.gguf) | Q3_K | 3.14GB |
| [llama-2-ko-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q3_K_M.gguf) | Q3_K_M | 3.14GB |
| [llama-2-ko-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q3_K_L.gguf) | Q3_K_L | 3.42GB |
| [llama-2-ko-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ4_XS.gguf) | IQ4_XS | 3.47GB |
| [llama-2-ko-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_0.gguf) | Q4_0 | 3.64GB |
| [llama-2-ko-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.IQ4_NL.gguf) | IQ4_NL | 3.66GB |
| [llama-2-ko-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_K_S.gguf) | Q4_K_S | 3.67GB |
| [llama-2-ko-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_K.gguf) | Q4_K | 3.88GB |
| [llama-2-ko-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_K_M.gguf) | Q4_K_M | 3.88GB |
| [llama-2-ko-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q4_1.gguf) | Q4_1 | 4.03GB |
| [llama-2-ko-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_0.gguf) | Q5_0 | 4.42GB |
| [llama-2-ko-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_K_S.gguf) | Q5_K_S | 4.42GB |
| [llama-2-ko-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_K.gguf) | Q5_K | 4.54GB |
| [llama-2-ko-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_K_M.gguf) | Q5_K_M | 4.54GB |
| [llama-2-ko-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q5_1.gguf) | Q5_1 | 4.8GB |
| [llama-2-ko-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_llama-2-ko-7b-gguf/blob/main/llama-2-ko-7b.Q6_K.gguf) | Q6_K | 5.24GB |
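These files can be run locally with llama.cpp or llama-cpp-python. The sketch below is a rough example; the chosen quant file and the sampling settings are assumptions, not recommendations from the original author.
```python
# Rough sketch: load one of the GGUF files above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-ko-7b.Q4_K_M.gguf",  # any quant from the table works the same way
    n_ctx=4096,
)

out = llm("한국의 수도는 어디인가요?", max_tokens=128, temperature=0.7)  # "What is the capital of Korea?"
print(out["choices"][0]["text"])
```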
Original model description:
---
language:
- en
- ko
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- kollama
- llama-2-ko
---
**Update Log**
- 2023.12.27
- New Model is here! Trained with only open-accessible Korean text corpus: https://huggingface.co/beomi/open-llama-2-ko-7b
- 2023.10.19
    - Fix tokenizer bug (space not applied when decoding) after `transformers>=4.34.0`
# **Llama-2-Ko** 🦙🇰🇷
Llama-2-Ko serves as an advanced iteration of Llama 2, benefiting from an expanded vocabulary and the inclusion of a Korean corpus in its further pretraining. Just like its predecessor, Llama-2-Ko operates within the broad range of generative text models that stretch from 7 billion to 70 billion parameters. This repository focuses on the 7B pretrained version, which is tailored to fit the Hugging Face Transformers format. For access to the other models, feel free to consult the index provided below.
## Model Details
**Model Developers** Junbum Lee (Beomi)
**Variations** Llama-2-Ko will come in a range of parameter sizes โ 7B, 13B, and 70B โ as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Llama-2-Ko is an auto-regressive language model that uses an optimized transformer architecture based on Llama-2.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of Korean online data*|7B|4k|✗|>40B*|1e<sup>-5</sup>|
*Plan to train up to 200B tokens
**Vocab Expansion**
| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Llama-2 | 32000 | Sentencepiece BPE |
| **Expanded Llama-2-Ko** | 46336 | Sentencepiece BPE. Added Korean vocab and merges |
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요']` |
| Llama-2-Ko | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요']` |
**Tokenizing "Llama 2: Open Foundation and Fine-Tuned Chat Models"**
| Model | Tokens |
| --- | --- |
| Llama-2 | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
| Llama-2-Ko | `['▁L', 'l', 'ama', '▁', '2', ':', '▁Open', '▁Foundation', '▁and', '▁Fine', '-', 'T', 'un', 'ed', '▁Ch', 'at', '▁Mod', 'els']` |
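The comparison above can be reproduced directly with Hugging Face tokenizers; a short sketch follows (the original Llama-2 repository is gated, so that line requires access approval).
```python
# Compare how the original and the expanded tokenizer split the same text.
from transformers import AutoTokenizer

ko_tok = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b", use_fast=True)
base_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # gated repo

text = "Llama 2: Open Foundation and Fine-Tuned Chat Models"
print(ko_tok.tokenize(text))
print(base_tok.tokenize(text))
```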
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Used EleutherAI's lm-evaluation-harness https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot
### NSMC (Acc) - 50000 full test
TBD
### COPA (F1)
<img src=https://user-images.githubusercontent.com/11323660/255575809-c037bc6e-0566-436a-a6c1-2329ac92187a.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| https://huggingface.co/kakaobrain/kogpt | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| https://huggingface.co/facebook/xglm-7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| Llama-2 Original 7B* | 0.562033 | 0.575982 | 0.576216 | 0.595532 |
| Llama-2-Ko-7b 20B (10k) | 0.738780 | 0.762639 | 0.780761 | 0.797863 |
| Llama-2-Ko-7b 40B (20k) | 0.743630 | 0.792716 | 0.803746 | 0.825944 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### HellaSwag (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576090-a2bfc1ae-d117-44b7-9f7b-262e41179ec1.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.5243 | 0.5272 | 0.5166 | 0.5352 |
| https://huggingface.co/kakaobrain/kogpt | 0.5590 | 0.5833 | 0.5828 | 0.5907 |
| https://huggingface.co/facebook/xglm-7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| Llama-2 Original 7B* | 0.415390 | 0.431382 | 0.421342 | 0.442003 |
| Llama-2-Ko-7b 20B (10k) | 0.451757 | 0.466751 | 0.472607 | 0.482776 |
| Llama-2-Ko-7b 40B (20k) | 0.456246 | 0.465665 | 0.469810 | 0.477374 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### BoolQ (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576343-5d847a6f-3b6a-41a7-af37-0f11940a5ea4.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.3356 | 0.4014 | 0.3640 | 0.3560 |
| https://huggingface.co/kakaobrain/kogpt | 0.4514 | 0.5981 | 0.5499 | 0.5202 |
| https://huggingface.co/facebook/xglm-7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| Llama-2 Original 7B* | 0.352050 | 0.563238 | 0.474788 | 0.419222 |
| Llama-2-Ko-7b 20B (10k) | 0.360656 | 0.679743 | 0.680109 | 0.662152 |
| Llama-2-Ko-7b 40B (20k) | 0.578640 | 0.697747 | 0.708358 | 0.714423 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
### SentiNeg (F1)
<img src=https://user-images.githubusercontent.com/11323660/255576572-b005a81d-fa4d-4709-b48a-f0fe4eed17a3.png style="max-width: 700px; width: 100%" />
| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5 | 0.6065 | 0.6878 | 0.7280 | 0.8413 |
| https://huggingface.co/kakaobrain/kogpt | 0.3747 | 0.8942 | 0.9294 | 0.9698 |
| https://huggingface.co/facebook/xglm-7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 |
| https://huggingface.co/EleutherAI/polyglot-ko-1.3b | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| https://huggingface.co/EleutherAI/polyglot-ko-3.8b | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| https://huggingface.co/EleutherAI/polyglot-ko-5.8b | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| https://huggingface.co/EleutherAI/polyglot-ko-12.8b | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| Llama-2 Original 7B* | 0.347502 | 0.529124 | 0.480641 | 0.788457 |
| Llama-2-Ko-7b 20B (10k) | 0.485546 | 0.829503 | 0.871141 | 0.851253 |
| Llama-2-Ko-7b 40B (20k) | 0.459447 | 0.761079 | 0.727611 | 0.936988 |
*Llama-2 Original 7B used https://huggingface.co/meta-llama/Llama-2-7b-hf (w/o tokenizer updated)
## Note for oobabooga/text-generation-webui
Remove the `ValueError` catch in the `load_tokenizer` function (around line 109) in `modules/models.py`.
```python
diff --git a/modules/models.py b/modules/models.py
index 232d5fa..de5b7a0 100644
--- a/modules/models.py
+++ b/modules/models.py
@@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
trust_remote_code=shared.args.trust_remote_code,
use_fast=False
)
- except ValueError:
+ except:
tokenizer = AutoTokenizer.from_pretrained(
path_to_model,
trust_remote_code=shared.args.trust_remote_code,
```
Since Llama-2-Ko uses the fast tokenizer provided by the HF `tokenizers` library, NOT the sentencepiece package,
it is required to pass the `use_fast=True` option when initializing the tokenizer.
Apple Silicon does not support BF16 computing; use the CPU instead. (BF16 is supported when using an NVIDIA GPU.)
## Citation
```
@misc {l._junbum_2023,
author = { {L. Junbum} },
title = { llama-2-ko-7b (Revision 4a9993e) },
year = 2023,
url = { https://huggingface.co/beomi/llama-2-ko-7b },
doi = { 10.57967/hf/1098 },
publisher = { Hugging Face }
}
```
## Acknowledgement
The training is supported by [TPU Research Cloud](https://sites.research.google/trc/) program.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__llama-2-ko-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 39.43 |
| ARC (25-shot) | 48.46 |
| HellaSwag (10-shot) | 75.28 |
| MMLU (5-shot) | 39.56 |
| TruthfulQA (0-shot) | 34.49 |
| Winogrande (5-shot) | 72.14 |
| GSM8K (5-shot) | 1.97 |
| DROP (3-shot) | 4.1 |
| {} | RichardErkhov/beomi_-_llama-2-ko-7b-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-03T16:15:23+00:00 |
null | null | {} | vup2p/model_sn25_44 | null | [
"region:us"
] | null | 2024-05-03T16:15:33+00:00 |
|
null | null | {} | Sebasvargas/CursoAI | null | [
"region:us"
] | null | 2024-05-03T16:15:38+00:00 |
|
text-classification | transformers | {} | Paral1ax/vk_model_clf_distilbert | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:15:48+00:00 |
|
audio-to-audio | null | Jack from Genndy Tartakovsky's "Samurai Jack" Cartoon Network/Adult Swim show!
For RVC/RVCv2 AI Covers
Made with Weights.gg
Feel free to use! Don't forget to credit if used
Note: Jack's voice is naturally deep, so don't forget to adjust your song's pitch so he can fully sound like himself.
Model by Radaverse | @samurairad | {"language": ["en"], "license": "openrail", "tags": ["RVC", "RVCv2", "AI", "Cover", "Voice", "Cartoon", "Samurai Jack"], "pipeline_tag": "audio-to-audio"} | Radaverse/SamuraiJack-RVCv2 | null | [
"RVC",
"RVCv2",
"AI",
"Cover",
"Voice",
"Cartoon",
"Samurai Jack",
"audio-to-audio",
"en",
"license:openrail",
"region:us"
] | null | 2024-05-03T16:15:49+00:00 |
null | null | {} | FanFierik/NoShaveNoHaircut | null | [
"region:us"
] | null | 2024-05-03T16:15:57+00:00 |
|
null | null | {} | GeorgeImmanuel/autonomous_taxi_ride | null | [
"region:us"
] | null | 2024-05-03T16:16:19+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4367
- F1 Score: 0.8170
- Accuracy: 0.8173
## Model description
More information needed
## Intended uses & limitations
More information needed
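In the absence of documented usage, a hypothetical loading sketch is shown below. The Auto class, the `num_labels` value, and the `trust_remote_code` flag are assumptions about the seqsight base model, not documented behaviour.
```python
# Hypothetical sketch: attach this PEFT adapter to the seqsight base model for classification.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_65536_512_47M"
adapter_id = "mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f"

tok = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2, trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tok("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
print(model(**inputs).logits)
```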
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5342 | 1.1 | 200 | 0.4669 | 0.8027 | 0.8027 |
| 0.4727 | 2.21 | 400 | 0.4585 | 0.7966 | 0.7982 |
| 0.4678 | 3.31 | 600 | 0.4500 | 0.8050 | 0.8058 |
| 0.4585 | 4.42 | 800 | 0.4482 | 0.8051 | 0.8062 |
| 0.4586 | 5.52 | 1000 | 0.4469 | 0.8050 | 0.8062 |
| 0.4519 | 6.63 | 1200 | 0.4499 | 0.8032 | 0.8048 |
| 0.4567 | 7.73 | 1400 | 0.4412 | 0.8097 | 0.8103 |
| 0.4482 | 8.84 | 1600 | 0.4460 | 0.8039 | 0.8051 |
| 0.4492 | 9.94 | 1800 | 0.4426 | 0.8105 | 0.8103 |
| 0.4476 | 11.05 | 2000 | 0.4397 | 0.8074 | 0.8083 |
| 0.4472 | 12.15 | 2200 | 0.4359 | 0.8109 | 0.8114 |
| 0.4424 | 13.26 | 2400 | 0.4347 | 0.8093 | 0.8100 |
| 0.4412 | 14.36 | 2600 | 0.4350 | 0.8097 | 0.8100 |
| 0.4441 | 15.47 | 2800 | 0.4438 | 0.8012 | 0.8031 |
| 0.4389 | 16.57 | 3000 | 0.4347 | 0.8085 | 0.8089 |
| 0.4408 | 17.68 | 3200 | 0.4338 | 0.8093 | 0.8100 |
| 0.4352 | 18.78 | 3400 | 0.4318 | 0.8126 | 0.8128 |
| 0.4363 | 19.89 | 3600 | 0.4363 | 0.8085 | 0.8096 |
| 0.4377 | 20.99 | 3800 | 0.4340 | 0.8094 | 0.8100 |
| 0.4367 | 22.1 | 4000 | 0.4326 | 0.8103 | 0.8110 |
| 0.4356 | 23.2 | 4200 | 0.4325 | 0.8113 | 0.8121 |
| 0.436 | 24.31 | 4400 | 0.4342 | 0.8125 | 0.8131 |
| 0.4275 | 25.41 | 4600 | 0.4359 | 0.8140 | 0.8148 |
| 0.4331 | 26.52 | 4800 | 0.4318 | 0.8132 | 0.8135 |
| 0.4341 | 27.62 | 5000 | 0.4310 | 0.8130 | 0.8135 |
| 0.4297 | 28.73 | 5200 | 0.4298 | 0.8112 | 0.8117 |
| 0.428 | 29.83 | 5400 | 0.4309 | 0.8138 | 0.8141 |
| 0.4299 | 30.94 | 5600 | 0.4318 | 0.8105 | 0.8107 |
| 0.4299 | 32.04 | 5800 | 0.4303 | 0.8141 | 0.8141 |
| 0.4309 | 33.15 | 6000 | 0.4284 | 0.8149 | 0.8152 |
| 0.4284 | 34.25 | 6200 | 0.4307 | 0.8125 | 0.8128 |
| 0.4275 | 35.36 | 6400 | 0.4322 | 0.8123 | 0.8131 |
| 0.4272 | 36.46 | 6600 | 0.4292 | 0.8162 | 0.8162 |
| 0.4286 | 37.57 | 6800 | 0.4303 | 0.8141 | 0.8145 |
| 0.4263 | 38.67 | 7000 | 0.4320 | 0.8136 | 0.8141 |
| 0.4246 | 39.78 | 7200 | 0.4304 | 0.8165 | 0.8166 |
| 0.4268 | 40.88 | 7400 | 0.4290 | 0.8150 | 0.8152 |
| 0.4263 | 41.99 | 7600 | 0.4290 | 0.8153 | 0.8155 |
| 0.4243 | 43.09 | 7800 | 0.4303 | 0.8161 | 0.8166 |
| 0.4262 | 44.2 | 8000 | 0.4295 | 0.8141 | 0.8145 |
| 0.4233 | 45.3 | 8200 | 0.4301 | 0.8152 | 0.8155 |
| 0.4256 | 46.41 | 8400 | 0.4286 | 0.8148 | 0.8152 |
| 0.4238 | 47.51 | 8600 | 0.4293 | 0.8156 | 0.8159 |
| 0.4236 | 48.62 | 8800 | 0.4312 | 0.8136 | 0.8141 |
| 0.4221 | 49.72 | 9000 | 0.4301 | 0.8142 | 0.8145 |
| 0.4283 | 50.83 | 9200 | 0.4296 | 0.8131 | 0.8135 |
| 0.4232 | 51.93 | 9400 | 0.4299 | 0.8142 | 0.8145 |
| 0.4238 | 53.04 | 9600 | 0.4297 | 0.8142 | 0.8145 |
| 0.4218 | 54.14 | 9800 | 0.4295 | 0.8149 | 0.8152 |
| 0.424 | 55.25 | 10000 | 0.4300 | 0.8145 | 0.8148 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:16:34+00:00 |
text-generation | transformers |
<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4
This model is a fine-tune (DPO) of `meta-llama/Meta-Llama-3-70B-Instruct` model.
# Quantized GGUF
All GGUF models are available here: [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4-GGUF)
# Prompt Template
This model uses `ChatML` prompt template:
```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
# How to use
You can use this model by using `MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4` as the model name in Hugging Face's
transformers library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch
model_id = "MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
# attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
streamer=streamer
)
# Then you can use the pipeline to generate text.
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|im_end|>"),
tokenizer.convert_tokens_to_ids("<|eot_id|>") # safer to have this too
]
outputs = pipeline(
prompt,
max_new_tokens=2048,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(outputs[0]["generated_text"][len(prompt):])
```
## Sample Response
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
describe about pros and cons of docker system.<|im_end|>
<|im_start|>assistant
```
```
Here are the pros and cons of the Docker system:
**Pros:**
1. **Lightweight**: Docker containers are lightweight and use fewer resources compared to traditional virtual machines.
2. **Isolation**: Docker provides a high level of isolation between containers, ensuring that each application runs independently without interfering with others.
3. **Portability**: Docker containers are highly portable and can run on any system that supports Docker, without requiring specific dependencies or configurations.
4. **Efficient Use of Resources**: Docker containers can share the same kernel and use fewer resources, making them more efficient than traditional virtual machines.
5. **Fast Deployment**: Docker containers can be spun up quickly, making it ideal for rapid deployment and scaling of applications.
6. **Easy Maintenance**: Docker provides a simple way to manage and update applications, with features like rolling updates and rollbacks.
7. **Security**: Docker provides a secure environment for applications, with features like network isolation and encryption.
8. **Large Ecosystem**: Docker has a large and active ecosystem, with a wide range of tools and plugins available for various tasks.
9. **Version Control**: Docker provides an easy way to manage different versions of an application, with features like version tagging and rollbacks.
10. **Cost-Effective**: Docker can help reduce costs by minimizing the need for virtual machines and reducing the overhead of resource allocation.
**Cons:**
1. **Steep Learning Curve**: Docker has a complex architecture and requires a good understanding of containerization, networking, and storage.
2. **Overhead**: While Docker containers are lightweight, they still require some overhead in terms of resources and management.
3. **Security Risks**: If not properly configured, Docker containers can introduce security risks, such as unauthorized access to the host system.
4. **Limited Support for GUI Applications**: Docker is primarily designed for running command-line applications, and running GUI applications can be challenging.
5. **Networking Complexity**: Docker's networking model can be complex, especially in large-scale deployments.
6. **Storage Complexity**: Docker's storage model can be complex, especially when dealing with persistent data.
7. **Limited Support for Legacy Applications**: Docker is designed for modern, cloud-native applications, and may not be suitable for legacy applications that rely on older technologies.
8. **Dependence on Docker Hub**: Docker relies heavily on Docker Hub, which can be a single point of failure if the hub is down or experiencing issues.
9. **Orchestration Complexity**: Docker provides a built-in orchestration tool, Docker Swarm, but it can be complex to set up and manage.
10. **Vendor Lock-in**: Docker is a proprietary technology, and while it has a large ecosystem, it can be difficult to switch to alternative containerization platforms.
Overall, Docker provides a powerful and flexible way to deploy and manage applications, but it requires careful planning, configuration, and management to ensure optimal performance and security.
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__Llama-3-70B-Instruct-DPO-v0.4)
| Metric |Value|
|---------------------------------|----:|
|Avg. |78.89|
|AI2 Reasoning Challenge (25-Shot)|72.61|
|HellaSwag (10-Shot) |86.03|
|MMLU (5-Shot) |80.50|
|TruthfulQA (0-shot) |63.26|
|Winogrande (5-shot) |83.58|
|GSM8k (5-shot) |87.34|
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["axolotl", "finetune", "dpo", "facebook", "meta", "pytorch", "llama", "llama-3", "chatml"], "datasets": ["argilla/ultrafeedback-binarized-preferences"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi", "model-index": [{"name": "Llama-3-70B-Instruct-DPO-v0.4", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 72.61, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 86.03, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 80.5, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 63.26}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 83.58, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 87.34, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.4", "name": "Open LLM Leaderboard"}}]}]} | blockblockblock/Llama-3-70B-Instruct-DPO-v0.4-bpw2.5-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"axolotl",
"finetune",
"dpo",
"facebook",
"meta",
"pytorch",
"llama-3",
"chatml",
"conversational",
"en",
"dataset:argilla/ultrafeedback-binarized-preferences",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:17:38+00:00 |
null | null | {} | nzsadvdfgzv/test | null | [
"region:us"
] | null | 2024-05-03T16:18:13+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | ar08/ar08 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:19:14+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4309
- F1 Score: 0.8185
- Accuracy: 0.8187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5148 | 1.1 | 200 | 0.4601 | 0.8090 | 0.8089 |
| 0.4624 | 2.21 | 400 | 0.4479 | 0.8057 | 0.8065 |
| 0.4592 | 3.31 | 600 | 0.4440 | 0.8066 | 0.8076 |
| 0.4474 | 4.42 | 800 | 0.4400 | 0.8033 | 0.8044 |
| 0.4463 | 5.52 | 1000 | 0.4429 | 0.8030 | 0.8048 |
| 0.4386 | 6.63 | 1200 | 0.4471 | 0.8024 | 0.8048 |
| 0.4403 | 7.73 | 1400 | 0.4353 | 0.8077 | 0.8089 |
| 0.4328 | 8.84 | 1600 | 0.4478 | 0.8019 | 0.8041 |
| 0.4305 | 9.94 | 1800 | 0.4266 | 0.8190 | 0.8190 |
| 0.4265 | 11.05 | 2000 | 0.4371 | 0.8041 | 0.8051 |
| 0.4265 | 12.15 | 2200 | 0.4269 | 0.8185 | 0.8183 |
| 0.4207 | 13.26 | 2400 | 0.4243 | 0.8151 | 0.8155 |
| 0.4176 | 14.36 | 2600 | 0.4245 | 0.8184 | 0.8183 |
| 0.4192 | 15.47 | 2800 | 0.4285 | 0.8111 | 0.8117 |
| 0.414 | 16.57 | 3000 | 0.4283 | 0.8175 | 0.8173 |
| 0.4149 | 17.68 | 3200 | 0.4244 | 0.8161 | 0.8162 |
| 0.4094 | 18.78 | 3400 | 0.4262 | 0.8172 | 0.8176 |
| 0.4091 | 19.89 | 3600 | 0.4239 | 0.8140 | 0.8141 |
| 0.4087 | 20.99 | 3800 | 0.4302 | 0.8091 | 0.8100 |
| 0.4076 | 22.1 | 4000 | 0.4246 | 0.8108 | 0.8114 |
| 0.4059 | 23.2 | 4200 | 0.4253 | 0.8144 | 0.8148 |
| 0.4057 | 24.31 | 4400 | 0.4300 | 0.8124 | 0.8131 |
| 0.3982 | 25.41 | 4600 | 0.4299 | 0.8159 | 0.8162 |
| 0.4019 | 26.52 | 4800 | 0.4289 | 0.8184 | 0.8187 |
| 0.4036 | 27.62 | 5000 | 0.4294 | 0.8112 | 0.8121 |
| 0.3975 | 28.73 | 5200 | 0.4243 | 0.8114 | 0.8121 |
| 0.3938 | 29.83 | 5400 | 0.4255 | 0.8134 | 0.8138 |
| 0.3966 | 30.94 | 5600 | 0.4280 | 0.8160 | 0.8162 |
| 0.3953 | 32.04 | 5800 | 0.4275 | 0.8214 | 0.8214 |
| 0.3972 | 33.15 | 6000 | 0.4261 | 0.8150 | 0.8155 |
| 0.3931 | 34.25 | 6200 | 0.4297 | 0.8170 | 0.8173 |
| 0.3914 | 35.36 | 6400 | 0.4287 | 0.8140 | 0.8145 |
| 0.393 | 36.46 | 6600 | 0.4275 | 0.8181 | 0.8183 |
| 0.3901 | 37.57 | 6800 | 0.4299 | 0.8136 | 0.8141 |
| 0.3893 | 38.67 | 7000 | 0.4314 | 0.8153 | 0.8159 |
| 0.3881 | 39.78 | 7200 | 0.4304 | 0.8184 | 0.8187 |
| 0.3886 | 40.88 | 7400 | 0.4277 | 0.8189 | 0.8190 |
| 0.3859 | 41.99 | 7600 | 0.4314 | 0.8162 | 0.8166 |
| 0.3869 | 43.09 | 7800 | 0.4308 | 0.8169 | 0.8173 |
| 0.3859 | 44.2 | 8000 | 0.4329 | 0.8149 | 0.8155 |
| 0.3839 | 45.3 | 8200 | 0.4341 | 0.8159 | 0.8162 |
| 0.3871 | 46.41 | 8400 | 0.4291 | 0.8184 | 0.8187 |
| 0.3848 | 47.51 | 8600 | 0.4327 | 0.8172 | 0.8176 |
| 0.3837 | 48.62 | 8800 | 0.4334 | 0.8164 | 0.8169 |
| 0.383 | 49.72 | 9000 | 0.4334 | 0.8158 | 0.8162 |
| 0.388 | 50.83 | 9200 | 0.4328 | 0.8160 | 0.8166 |
| 0.3826 | 51.93 | 9400 | 0.4316 | 0.8169 | 0.8173 |
| 0.3819 | 53.04 | 9600 | 0.4315 | 0.8166 | 0.8169 |
| 0.3815 | 54.14 | 9800 | 0.4318 | 0.8170 | 0.8173 |
| 0.3831 | 55.25 | 10000 | 0.4325 | 0.8166 | 0.8169 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:19:22+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K79me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K79me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4374
- F1 Score: 0.8193
- Accuracy: 0.8193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5042 | 1.1 | 200 | 0.4523 | 0.8134 | 0.8135 |
| 0.4575 | 2.21 | 400 | 0.4432 | 0.8135 | 0.8135 |
| 0.453 | 3.31 | 600 | 0.4381 | 0.8068 | 0.8079 |
| 0.4367 | 4.42 | 800 | 0.4333 | 0.8096 | 0.8107 |
| 0.4327 | 5.52 | 1000 | 0.4302 | 0.8134 | 0.8145 |
| 0.423 | 6.63 | 1200 | 0.4528 | 0.8043 | 0.8065 |
| 0.4233 | 7.73 | 1400 | 0.4418 | 0.8010 | 0.8031 |
| 0.4154 | 8.84 | 1600 | 0.4534 | 0.7936 | 0.7961 |
| 0.4116 | 9.94 | 1800 | 0.4231 | 0.8144 | 0.8145 |
| 0.4052 | 11.05 | 2000 | 0.4394 | 0.8028 | 0.8037 |
| 0.4028 | 12.15 | 2200 | 0.4245 | 0.8196 | 0.8197 |
| 0.397 | 13.26 | 2400 | 0.4251 | 0.8144 | 0.8148 |
| 0.3917 | 14.36 | 2600 | 0.4285 | 0.8201 | 0.8200 |
| 0.3907 | 15.47 | 2800 | 0.4296 | 0.8129 | 0.8131 |
| 0.3827 | 16.57 | 3000 | 0.4302 | 0.8171 | 0.8169 |
| 0.3821 | 17.68 | 3200 | 0.4380 | 0.8186 | 0.8187 |
| 0.3754 | 18.78 | 3400 | 0.4418 | 0.8105 | 0.8110 |
| 0.371 | 19.89 | 3600 | 0.4367 | 0.8177 | 0.8176 |
| 0.3684 | 20.99 | 3800 | 0.4477 | 0.8107 | 0.8110 |
| 0.3639 | 22.1 | 4000 | 0.4422 | 0.8158 | 0.8159 |
| 0.3605 | 23.2 | 4200 | 0.4480 | 0.8144 | 0.8145 |
| 0.3561 | 24.31 | 4400 | 0.4502 | 0.8163 | 0.8166 |
| 0.3478 | 25.41 | 4600 | 0.4584 | 0.8175 | 0.8173 |
| 0.3503 | 26.52 | 4800 | 0.4596 | 0.8121 | 0.8121 |
| 0.3491 | 27.62 | 5000 | 0.4524 | 0.8113 | 0.8117 |
| 0.3407 | 28.73 | 5200 | 0.4644 | 0.8110 | 0.8117 |
| 0.3349 | 29.83 | 5400 | 0.4509 | 0.8151 | 0.8152 |
| 0.3364 | 30.94 | 5600 | 0.4585 | 0.8171 | 0.8169 |
| 0.3328 | 32.04 | 5800 | 0.4492 | 0.8199 | 0.8197 |
| 0.3307 | 33.15 | 6000 | 0.4530 | 0.8164 | 0.8166 |
| 0.3277 | 34.25 | 6200 | 0.4746 | 0.8175 | 0.8173 |
| 0.3223 | 35.36 | 6400 | 0.4711 | 0.8181 | 0.8183 |
| 0.3192 | 36.46 | 6600 | 0.4757 | 0.8187 | 0.8187 |
| 0.3178 | 37.57 | 6800 | 0.4753 | 0.8139 | 0.8141 |
| 0.3153 | 38.67 | 7000 | 0.4703 | 0.8165 | 0.8169 |
| 0.3129 | 39.78 | 7200 | 0.4812 | 0.8196 | 0.8197 |
| 0.3105 | 40.88 | 7400 | 0.4763 | 0.8143 | 0.8141 |
| 0.3064 | 41.99 | 7600 | 0.4652 | 0.8180 | 0.8180 |
| 0.306 | 43.09 | 7800 | 0.4787 | 0.8145 | 0.8145 |
| 0.3041 | 44.2 | 8000 | 0.4898 | 0.8150 | 0.8152 |
| 0.3014 | 45.3 | 8200 | 0.4882 | 0.8173 | 0.8173 |
| 0.3005 | 46.41 | 8400 | 0.4859 | 0.8173 | 0.8173 |
| 0.3006 | 47.51 | 8600 | 0.4895 | 0.8143 | 0.8145 |
| 0.2973 | 48.62 | 8800 | 0.4882 | 0.8124 | 0.8124 |
| 0.2961 | 49.72 | 9000 | 0.4937 | 0.8140 | 0.8141 |
| 0.3008 | 50.83 | 9200 | 0.4829 | 0.8128 | 0.8131 |
| 0.2934 | 51.93 | 9400 | 0.4918 | 0.8133 | 0.8135 |
| 0.2928 | 53.04 | 9600 | 0.4910 | 0.8149 | 0.8148 |
| 0.2936 | 54.14 | 9800 | 0.4936 | 0.8156 | 0.8155 |
| 0.2934 | 55.25 | 10000 | 0.4941 | 0.8135 | 0.8135 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K79me3-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:19:50+00:00 |
null | null | {} | barrybadpak/agfdrvgb | null | [
"region:us"
] | null | 2024-05-03T16:21:59+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | InayaKripa/gemma-2b-toxic-ConvoV1 | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:23:48+00:00 |
text-classification | transformers |
This model is a fine-tuned version of the `distilbert/distilbert-base-multilingual-cased` model for the **Bengali** and **Hindi** languages.
The dataset used is a Kaggle Dataset - [Modified-hate-speech-bengali-hindi](https://www.kaggle.com/datasets/abirmondal/modified-hate-speech-bengali-hindi)
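A minimal usage sketch is below, assuming the standard text-classification pipeline applies to this checkpoint; the predicted label will be one of the five classes listed next.
```python
# Rough usage sketch for this classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="kingshukroy/distilbert-base-multilingual-cased-hate-speech-ben-hin",
)

text = "Replace this with a Bengali or Hindi sentence."
print(clf(text))  # e.g. [{'label': 'non-hate', 'score': ...}]
```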
This model can classify Bengali and Hindi texts into the following 5 classes:
- defamation
- hate
- non-hate
- violence
- vulgar | {"language": ["bn", "hi"], "license": "apache-2.0"} | kingshukroy/distilbert-base-multilingual-cased-hate-speech-ben-hin | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"bn",
"hi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:23:54+00:00 |
null | null | {} | vinven7/Llama2-ft-MatKG | null | [
"region:us"
] | null | 2024-05-03T16:24:37+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_finetued_on_scigen_server
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
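A possible way to load the adapter for inference is sketched below; the prompt format and generation settings are assumptions, and the adapter weights are assumed to live in this repository.
```python
# Sketch: load the PEFT adapter on top of the Mistral-7B-Instruct base model.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "moetezsa/mistral_finetued_on_scigen_server"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, torch_dtype=torch.bfloat16, device_map="auto")
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

prompt = "[INST] Summarise the following table in one sentence: ... [/INST]"  # placeholder prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=80)
print(tok.decode(out[0], skip_special_tokens=True))
```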
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 65536
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 30
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral_finetued_on_scigen_server", "results": []}]} | moetezsa/mistral_finetued_on_scigen_server | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T16:24:44+00:00 |
feature-extraction | transformers | {} | MahmoudTaktak/LEGAL_E2 | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:25:14+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototipo_5_EMI
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4215
- Accuracy: 0.538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.2459 | 0.1481 | 200 | 1.2168 | 0.4493 |
| 1.1445 | 0.2963 | 400 | 1.0823 | 0.512 |
| 1.1117 | 0.4444 | 600 | 1.0979 | 0.5053 |
| 1.0618 | 0.5926 | 800 | 1.0457 | 0.5273 |
| 1.0343 | 0.7407 | 1000 | 1.0219 | 0.537 |
| 1.1239 | 0.8889 | 1200 | 1.0353 | 0.5257 |
| 0.9012 | 1.0370 | 1400 | 1.0637 | 0.5383 |
| 0.86 | 1.1852 | 1600 | 1.0682 | 0.5333 |
| 0.898 | 1.3333 | 1800 | 1.0341 | 0.5483 |
| 0.929 | 1.4815 | 2000 | 1.0437 | 0.5363 |
| 0.9921 | 1.6296 | 2200 | 0.9968 | 0.5473 |
| 0.9776 | 1.7778 | 2400 | 1.0418 | 0.5553 |
| 0.9166 | 1.9259 | 2600 | 0.9874 | 0.5573 |
| 0.703 | 2.0741 | 2800 | 1.0564 | 0.556 |
| 0.8123 | 2.2222 | 3000 | 1.0582 | 0.561 |
| 0.6727 | 2.3704 | 3200 | 1.0942 | 0.5483 |
| 0.6843 | 2.5185 | 3400 | 1.1128 | 0.558 |
| 0.7528 | 2.6667 | 3600 | 1.0823 | 0.5547 |
| 0.7747 | 2.8148 | 3800 | 1.0744 | 0.5497 |
| 0.7471 | 2.9630 | 4000 | 1.0749 | 0.5527 |
| 0.5774 | 3.1111 | 4200 | 1.1422 | 0.552 |
| 0.6105 | 3.2593 | 4400 | 1.2226 | 0.543 |
| 0.573 | 3.4074 | 4600 | 1.2427 | 0.5417 |
| 0.6047 | 3.5556 | 4800 | 1.2403 | 0.537 |
| 0.5334 | 3.7037 | 5000 | 1.2470 | 0.5413 |
| 0.5688 | 3.8519 | 5200 | 1.2585 | 0.5507 |
| 0.4928 | 4.0 | 5400 | 1.2653 | 0.5437 |
| 0.4314 | 4.1481 | 5600 | 1.3419 | 0.541 |
| 0.4556 | 4.2963 | 5800 | 1.3677 | 0.5413 |
| 0.4815 | 4.4444 | 6000 | 1.3912 | 0.5407 |
| 0.4431 | 4.5926 | 6200 | 1.4004 | 0.5347 |
| 0.4312 | 4.7407 | 6400 | 1.4161 | 0.5397 |
| 0.459 | 4.8889 | 6600 | 1.4215 | 0.538 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dccuchile/bert-base-spanish-wwm-uncased", "model-index": [{"name": "Prototipo_5_EMI", "results": []}]} | Armandodelca/Prototipo_5_EMI | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:25:41+00:00 |
text-generation | transformers | Barcenas 3.8b
Based on the Phi-3-Mini-4K-Boost from DeepMount00 and trained with the pinzhenchen/alpaca-cleaned-es dataset to improve Spanish conversations.
The goal of this model is to have a small LLM that can express itself correctly and fluently in the Spanish language.
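A quick generation sketch is below; it assumes the tokenizer ships a chat template, and the sampling values are illustrative only.
```python
# Quick Spanish generation sketch (chat template and sampling values are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Danielbrdz/Barcenas-3.8b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explícame en dos frases qué es el aprendizaje automático."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=120, do_sample=True, temperature=0.7)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```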
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 | {"language": ["es", "en"], "license": "mit", "tags": ["phi"]} | Danielbrdz/Barcenas-3.8b | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"phi",
"conversational",
"es",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:25:51+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOpeepeepoopoo/herewegoagain15c | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:25:57+00:00 |
null | null | {"license": "unknown"} | charaatasnim/chekpointzip | null | [
"license:unknown",
"region:us"
] | null | 2024-05-03T16:27:42+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/l9w0l2v | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:27:49+00:00 |
text-generation | transformers | {"license": "mit"} | sheikhfoysal/chatEDU-v2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:28:49+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5417
- F1 Score: 0.7453
- Accuracy: 0.7465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6298 | 1.01 | 200 | 0.6057 | 0.6909 | 0.6929 |
| 0.6045 | 2.02 | 400 | 0.6003 | 0.6998 | 0.7004 |
| 0.5948 | 3.03 | 600 | 0.5930 | 0.7061 | 0.7074 |
| 0.5909 | 4.04 | 800 | 0.5868 | 0.7121 | 0.7146 |
| 0.5809 | 5.05 | 1000 | 0.5821 | 0.7155 | 0.7172 |
| 0.578 | 6.06 | 1200 | 0.5811 | 0.7148 | 0.7191 |
| 0.573 | 7.07 | 1400 | 0.5783 | 0.7180 | 0.7213 |
| 0.5737 | 8.08 | 1600 | 0.5748 | 0.7206 | 0.7235 |
| 0.5703 | 9.09 | 1800 | 0.5703 | 0.7262 | 0.7279 |
| 0.5664 | 10.1 | 2000 | 0.5725 | 0.7213 | 0.7222 |
| 0.5643 | 11.11 | 2200 | 0.5712 | 0.7248 | 0.7270 |
| 0.5647 | 12.12 | 2400 | 0.5695 | 0.7278 | 0.7292 |
| 0.563 | 13.13 | 2600 | 0.5682 | 0.7251 | 0.7270 |
| 0.5629 | 14.14 | 2800 | 0.5641 | 0.7292 | 0.7314 |
| 0.5582 | 15.15 | 3000 | 0.5625 | 0.7289 | 0.7307 |
| 0.5586 | 16.16 | 3200 | 0.5639 | 0.7267 | 0.7295 |
| 0.5564 | 17.17 | 3400 | 0.5630 | 0.7295 | 0.7323 |
| 0.5565 | 18.18 | 3600 | 0.5582 | 0.7325 | 0.7336 |
| 0.5531 | 19.19 | 3800 | 0.5613 | 0.7311 | 0.7336 |
| 0.5546 | 20.2 | 4000 | 0.5590 | 0.7314 | 0.7330 |
| 0.5507 | 21.21 | 4200 | 0.5631 | 0.7337 | 0.7367 |
| 0.552 | 22.22 | 4400 | 0.5596 | 0.7344 | 0.7358 |
| 0.5501 | 23.23 | 4600 | 0.5615 | 0.7342 | 0.7367 |
| 0.5545 | 24.24 | 4800 | 0.5566 | 0.7385 | 0.7405 |
| 0.5478 | 25.25 | 5000 | 0.5563 | 0.7372 | 0.7386 |
| 0.5501 | 26.26 | 5200 | 0.5585 | 0.7345 | 0.7355 |
| 0.5459 | 27.27 | 5400 | 0.5563 | 0.7350 | 0.7367 |
| 0.5483 | 28.28 | 5600 | 0.5585 | 0.7332 | 0.7339 |
| 0.5521 | 29.29 | 5800 | 0.5566 | 0.7362 | 0.7386 |
| 0.5451 | 30.3 | 6000 | 0.5552 | 0.7358 | 0.7371 |
| 0.5469 | 31.31 | 6200 | 0.5547 | 0.7378 | 0.7396 |
| 0.5478 | 32.32 | 6400 | 0.5564 | 0.7350 | 0.7380 |
| 0.5417 | 33.33 | 6600 | 0.5552 | 0.7365 | 0.7390 |
| 0.5433 | 34.34 | 6800 | 0.5562 | 0.7347 | 0.7377 |
| 0.5425 | 35.35 | 7000 | 0.5530 | 0.7407 | 0.7421 |
| 0.5473 | 36.36 | 7200 | 0.5529 | 0.7363 | 0.7380 |
| 0.5431 | 37.37 | 7400 | 0.5531 | 0.7360 | 0.7374 |
| 0.542 | 38.38 | 7600 | 0.5538 | 0.7373 | 0.7383 |
| 0.5421 | 39.39 | 7800 | 0.5536 | 0.7358 | 0.7371 |
| 0.544 | 40.4 | 8000 | 0.5536 | 0.7371 | 0.7386 |
| 0.5428 | 41.41 | 8200 | 0.5535 | 0.7369 | 0.7393 |
| 0.5474 | 42.42 | 8400 | 0.5529 | 0.7382 | 0.7405 |
| 0.5415 | 43.43 | 8600 | 0.5530 | 0.7345 | 0.7364 |
| 0.54 | 44.44 | 8800 | 0.5529 | 0.7367 | 0.7383 |
| 0.5416 | 45.45 | 9000 | 0.5526 | 0.7380 | 0.7396 |
| 0.5424 | 46.46 | 9200 | 0.5527 | 0.7364 | 0.7383 |
| 0.5451 | 47.47 | 9400 | 0.5525 | 0.7361 | 0.7380 |
| 0.5367 | 48.48 | 9600 | 0.5528 | 0.7359 | 0.7377 |
| 0.5461 | 49.49 | 9800 | 0.5524 | 0.7362 | 0.7380 |
| 0.5409 | 50.51 | 10000 | 0.5525 | 0.7369 | 0.7386 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:28:56+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5342
- F1 Score: 0.7479
- Accuracy: 0.7503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6119 | 1.01 | 200 | 0.5885 | 0.7076 | 0.7118 |
| 0.5758 | 2.02 | 400 | 0.5650 | 0.7279 | 0.7295 |
| 0.5634 | 3.03 | 600 | 0.5565 | 0.7319 | 0.7342 |
| 0.5565 | 4.04 | 800 | 0.5557 | 0.7387 | 0.7405 |
| 0.5482 | 5.05 | 1000 | 0.5473 | 0.7424 | 0.7434 |
| 0.543 | 6.06 | 1200 | 0.5546 | 0.7293 | 0.7345 |
| 0.5323 | 7.07 | 1400 | 0.5504 | 0.7414 | 0.7440 |
| 0.531 | 8.08 | 1600 | 0.5479 | 0.7375 | 0.7402 |
| 0.5291 | 9.09 | 1800 | 0.5429 | 0.7434 | 0.7459 |
| 0.5196 | 10.1 | 2000 | 0.5444 | 0.7473 | 0.7494 |
| 0.5147 | 11.11 | 2200 | 0.5500 | 0.7482 | 0.7513 |
| 0.5142 | 12.12 | 2400 | 0.5434 | 0.7444 | 0.7462 |
| 0.5088 | 13.13 | 2600 | 0.5517 | 0.7404 | 0.7431 |
| 0.5083 | 14.14 | 2800 | 0.5411 | 0.7478 | 0.7494 |
| 0.4974 | 15.15 | 3000 | 0.5450 | 0.7427 | 0.7449 |
| 0.4994 | 16.16 | 3200 | 0.5419 | 0.7409 | 0.7440 |
| 0.4928 | 17.17 | 3400 | 0.5453 | 0.7485 | 0.7503 |
| 0.4909 | 18.18 | 3600 | 0.5473 | 0.7428 | 0.7443 |
| 0.4837 | 19.19 | 3800 | 0.5471 | 0.7458 | 0.7478 |
| 0.4813 | 20.2 | 4000 | 0.5473 | 0.7402 | 0.7421 |
| 0.4778 | 21.21 | 4200 | 0.5463 | 0.7410 | 0.7440 |
| 0.473 | 22.22 | 4400 | 0.5585 | 0.7418 | 0.7434 |
| 0.4706 | 23.23 | 4600 | 0.5573 | 0.7435 | 0.7446 |
| 0.4706 | 24.24 | 4800 | 0.5544 | 0.7398 | 0.7424 |
| 0.4626 | 25.25 | 5000 | 0.5588 | 0.7457 | 0.7465 |
| 0.463 | 26.26 | 5200 | 0.5579 | 0.7399 | 0.7408 |
| 0.4582 | 27.27 | 5400 | 0.5557 | 0.7385 | 0.7415 |
| 0.4599 | 28.28 | 5600 | 0.5634 | 0.7389 | 0.7399 |
| 0.4575 | 29.29 | 5800 | 0.5552 | 0.7401 | 0.7431 |
| 0.453 | 30.3 | 6000 | 0.5668 | 0.7389 | 0.7405 |
| 0.4525 | 31.31 | 6200 | 0.5550 | 0.7407 | 0.7421 |
| 0.4521 | 32.32 | 6400 | 0.5617 | 0.7406 | 0.7434 |
| 0.4442 | 33.33 | 6600 | 0.5689 | 0.7395 | 0.7424 |
| 0.442 | 34.34 | 6800 | 0.5624 | 0.7455 | 0.7472 |
| 0.4384 | 35.35 | 7000 | 0.5687 | 0.7431 | 0.7449 |
| 0.4474 | 36.36 | 7200 | 0.5577 | 0.7393 | 0.7412 |
| 0.4343 | 37.37 | 7400 | 0.5663 | 0.7425 | 0.7443 |
| 0.4362 | 38.38 | 7600 | 0.5668 | 0.7413 | 0.7424 |
| 0.4359 | 39.39 | 7800 | 0.5707 | 0.7376 | 0.7386 |
| 0.4341 | 40.4 | 8000 | 0.5797 | 0.7384 | 0.7396 |
| 0.4292 | 41.41 | 8200 | 0.5780 | 0.7390 | 0.7421 |
| 0.4375 | 42.42 | 8400 | 0.5716 | 0.7386 | 0.7405 |
| 0.4293 | 43.43 | 8600 | 0.5735 | 0.7406 | 0.7424 |
| 0.4273 | 44.44 | 8800 | 0.5755 | 0.7391 | 0.7412 |
| 0.4243 | 45.45 | 9000 | 0.5755 | 0.7413 | 0.7427 |
| 0.426 | 46.46 | 9200 | 0.5778 | 0.7400 | 0.7418 |
| 0.4296 | 47.47 | 9400 | 0.5708 | 0.7387 | 0.7408 |
| 0.4172 | 48.48 | 9600 | 0.5781 | 0.7385 | 0.7408 |
| 0.4309 | 49.49 | 9800 | 0.5737 | 0.7396 | 0.7418 |
| 0.4227 | 50.51 | 10000 | 0.5749 | 0.7402 | 0.7421 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:29:06+00:00 |
null | null | {} | lngalmsyr/indobert-hoaxClass | null | [
"region:us"
] | null | 2024-05-03T16:29:10+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me1-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5308
- F1 Score: 0.7509
- Accuracy: 0.7532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6204 | 1.01 | 200 | 0.5983 | 0.6993 | 0.7027 |
| 0.5905 | 2.02 | 400 | 0.5808 | 0.7134 | 0.7165 |
| 0.5754 | 3.03 | 600 | 0.5707 | 0.7255 | 0.7279 |
| 0.5712 | 4.04 | 800 | 0.5692 | 0.7207 | 0.7235 |
| 0.5627 | 5.05 | 1000 | 0.5617 | 0.7320 | 0.7339 |
| 0.5601 | 6.06 | 1200 | 0.5642 | 0.7224 | 0.7266 |
| 0.5519 | 7.07 | 1400 | 0.5591 | 0.7345 | 0.7371 |
| 0.5513 | 8.08 | 1600 | 0.5567 | 0.7322 | 0.7352 |
| 0.5475 | 9.09 | 1800 | 0.5500 | 0.7391 | 0.7412 |
| 0.5416 | 10.1 | 2000 | 0.5502 | 0.7406 | 0.7424 |
| 0.5385 | 11.11 | 2200 | 0.5543 | 0.7401 | 0.7427 |
| 0.5387 | 12.12 | 2400 | 0.5486 | 0.7430 | 0.7443 |
| 0.5355 | 13.13 | 2600 | 0.5536 | 0.7396 | 0.7418 |
| 0.5353 | 14.14 | 2800 | 0.5485 | 0.7441 | 0.7462 |
| 0.5297 | 15.15 | 3000 | 0.5485 | 0.7435 | 0.7456 |
| 0.5287 | 16.16 | 3200 | 0.5456 | 0.7401 | 0.7431 |
| 0.5269 | 17.17 | 3400 | 0.5484 | 0.7426 | 0.7453 |
| 0.5257 | 18.18 | 3600 | 0.5428 | 0.7473 | 0.7487 |
| 0.5213 | 19.19 | 3800 | 0.5433 | 0.7402 | 0.7431 |
| 0.5215 | 20.2 | 4000 | 0.5433 | 0.7452 | 0.7472 |
| 0.5196 | 21.21 | 4200 | 0.5514 | 0.7434 | 0.7465 |
| 0.5188 | 22.22 | 4400 | 0.5465 | 0.7448 | 0.7472 |
| 0.5182 | 23.23 | 4600 | 0.5449 | 0.7456 | 0.7481 |
| 0.5198 | 24.24 | 4800 | 0.5446 | 0.7427 | 0.7456 |
| 0.514 | 25.25 | 5000 | 0.5429 | 0.7467 | 0.7481 |
| 0.5147 | 26.26 | 5200 | 0.5465 | 0.7484 | 0.7491 |
| 0.5109 | 27.27 | 5400 | 0.5419 | 0.7468 | 0.7487 |
| 0.514 | 28.28 | 5600 | 0.5464 | 0.7447 | 0.7453 |
| 0.5148 | 29.29 | 5800 | 0.5456 | 0.7457 | 0.7487 |
| 0.5083 | 30.3 | 6000 | 0.5455 | 0.7457 | 0.7472 |
| 0.509 | 31.31 | 6200 | 0.5444 | 0.7466 | 0.7481 |
| 0.5105 | 32.32 | 6400 | 0.5460 | 0.7460 | 0.7484 |
| 0.5053 | 33.33 | 6600 | 0.5483 | 0.7447 | 0.7475 |
| 0.5054 | 34.34 | 6800 | 0.5445 | 0.7463 | 0.7487 |
| 0.5027 | 35.35 | 7000 | 0.5424 | 0.7487 | 0.7503 |
| 0.5086 | 36.36 | 7200 | 0.5405 | 0.7464 | 0.7481 |
| 0.5017 | 37.37 | 7400 | 0.5435 | 0.7440 | 0.7456 |
| 0.4999 | 38.38 | 7600 | 0.5433 | 0.7486 | 0.7497 |
| 0.5025 | 39.39 | 7800 | 0.5442 | 0.7475 | 0.7484 |
| 0.5038 | 40.4 | 8000 | 0.5467 | 0.7470 | 0.7484 |
| 0.501 | 41.41 | 8200 | 0.5445 | 0.7416 | 0.7443 |
| 0.5077 | 42.42 | 8400 | 0.5422 | 0.7472 | 0.7494 |
| 0.4974 | 43.43 | 8600 | 0.5435 | 0.7464 | 0.7481 |
| 0.4987 | 44.44 | 8800 | 0.5445 | 0.7446 | 0.7462 |
| 0.4971 | 45.45 | 9000 | 0.5449 | 0.7468 | 0.7484 |
| 0.499 | 46.46 | 9200 | 0.5438 | 0.7470 | 0.7487 |
| 0.503 | 47.47 | 9400 | 0.5428 | 0.7437 | 0.7456 |
| 0.4927 | 48.48 | 9600 | 0.5450 | 0.7463 | 0.7481 |
| 0.5038 | 49.49 | 9800 | 0.5438 | 0.7460 | 0.7478 |
| 0.4975 | 50.51 | 10000 | 0.5441 | 0.7454 | 0.7472 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K4me1-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me1-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:29:21+00:00 |
text-generation | transformers | {} | Zekunli/Llama-2-7b-mlp2x-mmncphysical-text-only-v2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:29:25+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5000
- F1 Score: 0.7751
- Accuracy: 0.7769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5805 | 0.92 | 200 | 0.5475 | 0.7329 | 0.7351 |
| 0.533 | 1.83 | 400 | 0.5360 | 0.7384 | 0.7411 |
| 0.5243 | 2.75 | 600 | 0.5261 | 0.7465 | 0.7483 |
| 0.5216 | 3.67 | 800 | 0.5199 | 0.7555 | 0.7563 |
| 0.5103 | 4.59 | 1000 | 0.5198 | 0.7563 | 0.7583 |
| 0.5074 | 5.5 | 1200 | 0.5137 | 0.7593 | 0.7615 |
| 0.5047 | 6.42 | 1400 | 0.5086 | 0.7731 | 0.7738 |
| 0.5017 | 7.34 | 1600 | 0.5109 | 0.7695 | 0.7712 |
| 0.4951 | 8.26 | 1800 | 0.5114 | 0.7696 | 0.7718 |
| 0.499 | 9.17 | 2000 | 0.5101 | 0.7674 | 0.7701 |
| 0.4968 | 10.09 | 2200 | 0.5107 | 0.7670 | 0.7704 |
| 0.4928 | 11.01 | 2400 | 0.5085 | 0.7655 | 0.7689 |
| 0.4914 | 11.93 | 2600 | 0.5024 | 0.7741 | 0.7764 |
| 0.4898 | 12.84 | 2800 | 0.5021 | 0.7707 | 0.7732 |
| 0.4886 | 13.76 | 3000 | 0.5087 | 0.7676 | 0.7709 |
| 0.4853 | 14.68 | 3200 | 0.4988 | 0.7759 | 0.7775 |
| 0.489 | 15.6 | 3400 | 0.5080 | 0.7675 | 0.7712 |
| 0.4866 | 16.51 | 3600 | 0.5003 | 0.7750 | 0.7769 |
| 0.4851 | 17.43 | 3800 | 0.4924 | 0.7816 | 0.7830 |
| 0.4856 | 18.35 | 4000 | 0.4995 | 0.7763 | 0.7787 |
| 0.4816 | 19.27 | 4200 | 0.4990 | 0.7754 | 0.7775 |
| 0.4845 | 20.18 | 4400 | 0.5034 | 0.7717 | 0.7749 |
| 0.4832 | 21.1 | 4600 | 0.4975 | 0.7765 | 0.7787 |
| 0.4828 | 22.02 | 4800 | 0.5014 | 0.7756 | 0.7778 |
| 0.4829 | 22.94 | 5000 | 0.4969 | 0.7744 | 0.7769 |
| 0.4803 | 23.85 | 5200 | 0.4996 | 0.7732 | 0.7761 |
| 0.4788 | 24.77 | 5400 | 0.5065 | 0.7725 | 0.7758 |
| 0.4817 | 25.69 | 5600 | 0.5004 | 0.7760 | 0.7784 |
| 0.4796 | 26.61 | 5800 | 0.4973 | 0.7755 | 0.7778 |
| 0.4758 | 27.52 | 6000 | 0.5100 | 0.7729 | 0.7764 |
| 0.4787 | 28.44 | 6200 | 0.5018 | 0.7717 | 0.7747 |
| 0.4762 | 29.36 | 6400 | 0.5042 | 0.7713 | 0.7747 |
| 0.4794 | 30.28 | 6600 | 0.5040 | 0.7725 | 0.7758 |
| 0.4762 | 31.19 | 6800 | 0.4930 | 0.7812 | 0.7827 |
| 0.476 | 32.11 | 7000 | 0.4992 | 0.7733 | 0.7764 |
| 0.4767 | 33.03 | 7200 | 0.5005 | 0.7742 | 0.7769 |
| 0.4753 | 33.94 | 7400 | 0.5002 | 0.7756 | 0.7781 |
| 0.4756 | 34.86 | 7600 | 0.4983 | 0.7750 | 0.7778 |
| 0.4743 | 35.78 | 7800 | 0.4978 | 0.7738 | 0.7767 |
| 0.476 | 36.7 | 8000 | 0.4983 | 0.7744 | 0.7772 |
| 0.4736 | 37.61 | 8200 | 0.5032 | 0.7712 | 0.7747 |
| 0.4758 | 38.53 | 8400 | 0.4928 | 0.7799 | 0.7818 |
| 0.4734 | 39.45 | 8600 | 0.4986 | 0.7745 | 0.7772 |
| 0.4725 | 40.37 | 8800 | 0.5023 | 0.7729 | 0.7761 |
| 0.4773 | 41.28 | 9000 | 0.4986 | 0.7734 | 0.7764 |
| 0.4743 | 42.2 | 9200 | 0.4955 | 0.7774 | 0.7798 |
| 0.4721 | 43.12 | 9400 | 0.4984 | 0.7755 | 0.7781 |
| 0.4744 | 44.04 | 9600 | 0.4979 | 0.7750 | 0.7778 |
| 0.4732 | 44.95 | 9800 | 0.5005 | 0.7721 | 0.7752 |
| 0.4742 | 45.87 | 10000 | 0.4987 | 0.7755 | 0.7784 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:29:45+00:00 |
text-generation | transformers | # dareties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with ./models2/Meta-Llama-3-8B-instruct as the base.
### Models Merged
The following models were included in the merge:
* ./models2/Llama-3-Kafka-8B-v0.1
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* ./models2/Llama3_DiscoLM_German_8b_v0.1_experimental
* ./models2/Llama-3-SauerkrautLM-8b-Instruct
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ./models2/Meta-Llama-3-8B-instruct
# No parameters necessary for base model
- model: ./models2/Llama-3-SauerkrautLM-8b-Instruct
parameters:
density: 0.6
weight: 0.25
- model: ./models2/Llama3_DiscoLM_German_8b_v0.1_experimental
parameters:
density: 0.6
weight: 0.25
- model: ./models2/Llama-3-Kafka-8B-v0.1
parameters:
density: 0.6
weight: 0.25
- model: NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
density: 0.6
weight: 0.25
merge_method: dare_ties
base_model: ./models2/Meta-Llama-3-8B-instruct
tokenizer_source: model:NousResearch/Hermes-2-Pro-Llama-3-8B
parameters:
int8_mask: true
dtype: bfloat16
```
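For a quick smoke test of the merged checkpoint, a minimal transformers loading sketch is shown below. The repo id is taken from this card; the prompt and generation settings are arbitrary assumptions and are not part of the merge recipe.

```python
# Minimal sketch: load the merged model like any other Llama-3 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "johannhartmann/llama8_dt_b"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Fasse die Verwandlung von Kafka in zwei Sätzen zusammen."  # example prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```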
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Llama-3-8B"]} | johannhartmann/llama8_dt_b | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:30:05+00:00 |
null | null | {} | Gokulapriyan9677/videomae-base-finetuned-ucf101-subset | null | [
"region:us"
] | null | 2024-05-03T16:30:15+00:00 |
|
object-detection | transformers | {} | nsugianto/tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session6 | null | [
"transformers",
"tensorboard",
"safetensors",
"table-transformer",
"object-detection",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:30:25+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** johannoriel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
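A minimal loading sketch is shown below. It assumes this repository contains LoRA adapters saved by Unsloth on top of the 4-bit base model; if the weights were merged instead, plain transformers loading works as well. The prompt is only an illustration.

```python
# Minimal sketch, assuming the repo holds Unsloth/PEFT LoRA adapters.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="johannoriel/medllama_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode

prompt = "Summarize the first-line management of type 2 diabetes."  # example prompt
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```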
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | johannoriel/medllama_model | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:30:46+00:00 |
null | null | This model is finetuned on an ML Q&A dataset, hanyueshf/ml-arxiv-papers-qa. It outperforms both its base Llama-2-7B-Chat and Llama-3-8B-Instruct, as shown in the figure below.
Note: improvement = (finetuned_llama2_scores - base_llama2_scores) / base_llama2_scores.
The finetuning code is available on Github at https://github.com/hanyuesgithub/QA-ml-arxiv-papers.

| {} | hanyueshf/llama-2-7b-chat-ml-qa | null | [
"region:us"
] | null | 2024-05-03T16:31:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_boolq_bert
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5736
- Accuracy: 0.7222
- F1: 0.7325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.6443 | 4.1667 | 50 | 0.5606 | 0.7778 | 0.6806 |
| 0.3932 | 8.3333 | 100 | 0.6016 | 0.6111 | 0.6255 |
| 0.126 | 12.5 | 150 | 1.0887 | 0.5 | 0.5418 |
| 0.0166 | 16.6667 | 200 | 1.5543 | 0.5556 | 0.5829 |
| 0.0041 | 20.8333 | 250 | 1.5032 | 0.7222 | 0.7325 |
| 0.0022 | 25.0 | 300 | 1.7354 | 0.6667 | 0.6872 |
| 0.0018 | 29.1667 | 350 | 1.5756 | 0.6667 | 0.6667 |
| 0.0016 | 33.3333 | 400 | 1.5736 | 0.7222 | 0.7325 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "fine_tuned_boolq_bert", "results": []}]} | lenatr99/fine_tuned_boolq_bert | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:31:44+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # with newer installs: `import gymnasium as gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="TeoGal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | TeoGal/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-05-03T16:32:44+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | golf2248/mdxmtky | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:32:49+00:00 |
null | transformers |
# LLOROv2 - GGUF VERSION
This repository contains the 7B-parameter Llorov2 model in GGUF format, in a 16-bit version and also in an 8-bit quantized version.
Lloro, developed by the Semantix Research Labs, is a Language Model that was trained to effectively perform Data Analysis in Portuguese with Python. It is an improved version of codellama/CodeLlama-7b-Instruct-hf, which was trained on synthetic datasets. The fine-tuning process was carried out using the QLoRA methodology on a V100 GPU with 16 GB of RAM.
Visit the [site](https://semantix.ai/conheca-o-lloro-o-primeiro-modelo-de-ia-expert-em-analise-de-dados-100-brasileiro/) for more information about Lloro.
# About the GGUF format
The GGUF-format model can be used for inference with llama.cpp, on either CPU or GPU, as well as with other compatible libraries and tools (a minimal llama-cpp-python sketch follows this list), such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LM Studio](https://lmstudio.ai/)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [ctransformers](https://github.com/marella/ctransformers)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
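The sketch below uses llama-cpp-python together with huggingface_hub to download and run one of the GGUF files. The GGUF file name and the prompt are assumptions for illustration; check the repository's file listing for the actual quantization names.

```python
# Minimal inference sketch with llama-cpp-python.
# Assumption: the repo ships a GGUF file named "llorov2-q8_0.gguf";
# replace it with the real file name from the repository listing.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="anaxsouza/llorov2-gguf",
    filename="llorov2-q8_0.gguf",  # hypothetical file name
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm(
    "Escreva um código Python que calcule a média de uma coluna em um DataFrame pandas.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```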
## Model Details
- **Base Model:** CodeLlama-7b-Instruct-hf
- **Training Dataset:** Synthetic
- **Language:** Portuguese
## Contributions
Contributions to improve this model are welcome. Feel free to open issues and pull requests. | {"language": ["pt"], "license": "llama2", "library_name": "transformers", "tags": ["LLM", "Portuguese", "Lloro", "Llama 2", "Q&A"], "datasets": ["semantixai/Test-Dataset-Lloro"], "base_model": "codellama/CodeLlama-7b-Instruct-hf"} | anaxsouza/llorov2-gguf | null | [
"transformers",
"gguf",
"LLM",
"Portuguese",
"Lloro",
"Llama 2",
"Q&A",
"pt",
"dataset:semantixai/Test-Dataset-Lloro",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:33:25+00:00 |
null | null | {"license": "cc-by-4.0"} | unileon-robotics/home-objects | null | [
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-03T16:34:43+00:00 |
|
text-generation | transformers | {} | Yeobin/llama-3-8b-math | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:34:49+00:00 |
|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-fsi
This model is a fine-tuned version of [dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978](https://huggingface.co/dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978) on the dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 5.8867 | 0.5333 | 1 | 5.7356 |
| 5.8867 | 1.6 | 3 | 2.7050 |
| 3.7066 | 2.6667 | 5 | 1.9496 |
| 3.7066 | 3.7333 | 7 | 1.5918 |
| 3.7066 | 4.8 | 9 | 1.3194 |
| 1.6243 | 5.8667 | 11 | 1.0055 |
| 1.6243 | 6.9333 | 13 | 0.8459 |
| 0.9667 | 8.0 | 15 | 0.7559 |
| 0.9667 | 8.5333 | 16 | 0.7331 |
| 0.9667 | 9.6 | 18 | 0.7034 |
| 0.7508 | 10.6667 | 20 | 0.7006 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml"], "base_model": "dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978", "model-index": [{"name": "sft-fsi", "results": []}]} | jamesoneill12/sft-fsi | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:dynamofl/train-default-FSI-PersonalFinancialAdvice-input-formatted-chatml",
"base_model:dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:34:49+00:00 |
null | null | {} | vinven7/Llama2-ft-CrystalStructure | null | [
"region:us"
] | null | 2024-05-03T16:35:07+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_65536_512_47M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4908
- F1 Score: 0.7777
- Accuracy: 0.7790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5667 | 0.92 | 200 | 0.5382 | 0.7367 | 0.7397 |
| 0.5207 | 1.83 | 400 | 0.5229 | 0.7512 | 0.7537 |
| 0.5071 | 2.75 | 600 | 0.5121 | 0.7597 | 0.7618 |
| 0.505 | 3.67 | 800 | 0.5060 | 0.7706 | 0.7724 |
| 0.4938 | 4.59 | 1000 | 0.5047 | 0.7715 | 0.7735 |
| 0.4908 | 5.5 | 1200 | 0.5033 | 0.7695 | 0.7724 |
| 0.4881 | 6.42 | 1400 | 0.4938 | 0.7790 | 0.7801 |
| 0.4832 | 7.34 | 1600 | 0.5067 | 0.7749 | 0.7775 |
| 0.4775 | 8.26 | 1800 | 0.4963 | 0.7822 | 0.7838 |
| 0.4815 | 9.17 | 2000 | 0.4922 | 0.7798 | 0.7815 |
| 0.478 | 10.09 | 2200 | 0.5153 | 0.7620 | 0.7675 |
| 0.4733 | 11.01 | 2400 | 0.4963 | 0.7765 | 0.7801 |
| 0.4707 | 11.93 | 2600 | 0.4824 | 0.7864 | 0.7873 |
| 0.4677 | 12.84 | 2800 | 0.4894 | 0.7764 | 0.7792 |
| 0.4668 | 13.76 | 3000 | 0.5010 | 0.7719 | 0.7761 |
| 0.4613 | 14.68 | 3200 | 0.4897 | 0.7811 | 0.7833 |
| 0.4644 | 15.6 | 3400 | 0.4857 | 0.7769 | 0.7795 |
| 0.463 | 16.51 | 3600 | 0.4989 | 0.7776 | 0.7807 |
| 0.4594 | 17.43 | 3800 | 0.4825 | 0.7863 | 0.7878 |
| 0.4594 | 18.35 | 4000 | 0.4870 | 0.7809 | 0.7833 |
| 0.4559 | 19.27 | 4200 | 0.4896 | 0.7829 | 0.7850 |
| 0.4579 | 20.18 | 4400 | 0.4996 | 0.7734 | 0.7772 |
| 0.4552 | 21.1 | 4600 | 0.4861 | 0.7824 | 0.7847 |
| 0.4564 | 22.02 | 4800 | 0.4899 | 0.7840 | 0.7861 |
| 0.4525 | 22.94 | 5000 | 0.4892 | 0.7759 | 0.7792 |
| 0.4504 | 23.85 | 5200 | 0.4890 | 0.7818 | 0.7847 |
| 0.4467 | 24.77 | 5400 | 0.5002 | 0.7733 | 0.7775 |
| 0.4512 | 25.69 | 5600 | 0.4926 | 0.7807 | 0.7835 |
| 0.4492 | 26.61 | 5800 | 0.4851 | 0.7833 | 0.7856 |
| 0.4436 | 27.52 | 6000 | 0.5050 | 0.7786 | 0.7821 |
| 0.4465 | 28.44 | 6200 | 0.4897 | 0.7824 | 0.7853 |
| 0.4451 | 29.36 | 6400 | 0.4890 | 0.7758 | 0.7792 |
| 0.4446 | 30.28 | 6600 | 0.4969 | 0.7771 | 0.7810 |
| 0.4429 | 31.19 | 6800 | 0.4843 | 0.7854 | 0.7876 |
| 0.441 | 32.11 | 7000 | 0.4919 | 0.7806 | 0.7838 |
| 0.4424 | 33.03 | 7200 | 0.4934 | 0.7819 | 0.7850 |
| 0.4413 | 33.94 | 7400 | 0.4864 | 0.7825 | 0.7850 |
| 0.4409 | 34.86 | 7600 | 0.4901 | 0.7825 | 0.7853 |
| 0.4398 | 35.78 | 7800 | 0.4866 | 0.7823 | 0.7847 |
| 0.4412 | 36.7 | 8000 | 0.4897 | 0.7805 | 0.7835 |
| 0.4369 | 37.61 | 8200 | 0.4985 | 0.7776 | 0.7815 |
| 0.4408 | 38.53 | 8400 | 0.4874 | 0.7825 | 0.7853 |
| 0.4359 | 39.45 | 8600 | 0.4935 | 0.7800 | 0.7833 |
| 0.4366 | 40.37 | 8800 | 0.4989 | 0.7804 | 0.7838 |
| 0.4396 | 41.28 | 9000 | 0.4934 | 0.7810 | 0.7844 |
| 0.4359 | 42.2 | 9200 | 0.4899 | 0.7829 | 0.7858 |
| 0.4332 | 43.12 | 9400 | 0.4930 | 0.7831 | 0.7861 |
| 0.4371 | 44.04 | 9600 | 0.4909 | 0.7831 | 0.7861 |
| 0.4348 | 44.95 | 9800 | 0.4961 | 0.7813 | 0.7847 |
| 0.4362 | 45.87 | 10000 | 0.4933 | 0.7830 | 0.7861 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_65536_512_47M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:35:35+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_mouse_0-seqsight_65536_512_47M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5918
- F1 Score: 0.6924
- Accuracy: 0.6926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6506 | 3.92 | 200 | 0.6065 | 0.6652 | 0.6691 |
| 0.6137 | 7.84 | 400 | 0.5919 | 0.6697 | 0.6716 |
| 0.6024 | 11.76 | 600 | 0.5871 | 0.6856 | 0.6864 |
| 0.5927 | 15.69 | 800 | 0.5826 | 0.6963 | 0.6963 |
| 0.5819 | 19.61 | 1000 | 0.5840 | 0.6903 | 0.6914 |
| 0.5778 | 23.53 | 1200 | 0.5791 | 0.7016 | 0.7025 |
| 0.5674 | 27.45 | 1400 | 0.5823 | 0.6957 | 0.6963 |
| 0.5655 | 31.37 | 1600 | 0.5750 | 0.7000 | 0.7 |
| 0.5593 | 35.29 | 1800 | 0.5737 | 0.7037 | 0.7037 |
| 0.5532 | 39.22 | 2000 | 0.5797 | 0.6943 | 0.6951 |
| 0.5519 | 43.14 | 2200 | 0.5756 | 0.6985 | 0.6988 |
| 0.5508 | 47.06 | 2400 | 0.5696 | 0.7145 | 0.7148 |
| 0.5412 | 50.98 | 2600 | 0.5818 | 0.6816 | 0.6827 |
| 0.5392 | 54.9 | 2800 | 0.5716 | 0.7124 | 0.7136 |
| 0.5385 | 58.82 | 3000 | 0.5700 | 0.7047 | 0.7049 |
| 0.5367 | 62.75 | 3200 | 0.5681 | 0.7013 | 0.7012 |
| 0.5329 | 66.67 | 3400 | 0.5713 | 0.6976 | 0.6975 |
| 0.5304 | 70.59 | 3600 | 0.5742 | 0.7010 | 0.7012 |
| 0.5282 | 74.51 | 3800 | 0.5724 | 0.6975 | 0.6975 |
| 0.5279 | 78.43 | 4000 | 0.5690 | 0.6988 | 0.6988 |
| 0.5261 | 82.35 | 4200 | 0.5696 | 0.6998 | 0.7 |
| 0.5231 | 86.27 | 4400 | 0.5752 | 0.6985 | 0.6988 |
| 0.5214 | 90.2 | 4600 | 0.5722 | 0.7013 | 0.7012 |
| 0.5186 | 94.12 | 4800 | 0.5787 | 0.6983 | 0.6988 |
| 0.5183 | 98.04 | 5000 | 0.5680 | 0.7085 | 0.7086 |
| 0.5154 | 101.96 | 5200 | 0.5731 | 0.7036 | 0.7037 |
| 0.514 | 105.88 | 5400 | 0.5663 | 0.7121 | 0.7123 |
| 0.5163 | 109.8 | 5600 | 0.5668 | 0.7148 | 0.7148 |
| 0.5144 | 113.73 | 5800 | 0.5673 | 0.7184 | 0.7185 |
| 0.5153 | 117.65 | 6000 | 0.5702 | 0.7037 | 0.7037 |
| 0.5117 | 121.57 | 6200 | 0.5666 | 0.7161 | 0.7160 |
| 0.5099 | 125.49 | 6400 | 0.5759 | 0.7047 | 0.7049 |
| 0.5131 | 129.41 | 6600 | 0.5691 | 0.7112 | 0.7111 |
| 0.5109 | 133.33 | 6800 | 0.5681 | 0.7124 | 0.7123 |
| 0.5097 | 137.25 | 7000 | 0.5711 | 0.7086 | 0.7086 |
| 0.5056 | 141.18 | 7200 | 0.5727 | 0.7112 | 0.7111 |
| 0.5074 | 145.1 | 7400 | 0.5751 | 0.7060 | 0.7062 |
| 0.5065 | 149.02 | 7600 | 0.5696 | 0.7136 | 0.7136 |
| 0.5063 | 152.94 | 7800 | 0.5720 | 0.7099 | 0.7099 |
| 0.5041 | 156.86 | 8000 | 0.5691 | 0.7112 | 0.7111 |
| 0.5059 | 160.78 | 8200 | 0.5727 | 0.7099 | 0.7099 |
| 0.5064 | 164.71 | 8400 | 0.5724 | 0.7111 | 0.7111 |
| 0.5049 | 168.63 | 8600 | 0.5717 | 0.7099 | 0.7099 |
| 0.5042 | 172.55 | 8800 | 0.5697 | 0.7149 | 0.7148 |
| 0.5008 | 176.47 | 9000 | 0.5716 | 0.7149 | 0.7148 |
| 0.4982 | 180.39 | 9200 | 0.5729 | 0.7136 | 0.7136 |
| 0.4989 | 184.31 | 9400 | 0.5742 | 0.7099 | 0.7099 |
| 0.4989 | 188.24 | 9600 | 0.5733 | 0.7111 | 0.7111 |
| 0.504 | 192.16 | 9800 | 0.5722 | 0.7124 | 0.7123 |
| 0.4969 | 196.08 | 10000 | 0.5727 | 0.7111 | 0.7111 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_mouse_0-seqsight_65536_512_47M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_mouse_0-seqsight_65536_512_47M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:35:35+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_65536_512_47M](https://huggingface.co/mahdibaghbanzadeh/seqsight_65536_512_47M) on the [mahdibaghbanzadeh/GUE_EMP_H3K36me3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K36me3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5059
- F1 Score: 0.7826
- Accuracy: 0.7850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5563 | 0.92 | 200 | 0.5396 | 0.7412 | 0.7451 |
| 0.5106 | 1.83 | 400 | 0.5126 | 0.7627 | 0.7655 |
| 0.4971 | 2.75 | 600 | 0.5045 | 0.7687 | 0.7709 |
| 0.4962 | 3.67 | 800 | 0.4940 | 0.7739 | 0.7752 |
| 0.4826 | 4.59 | 1000 | 0.4927 | 0.7786 | 0.7801 |
| 0.4764 | 5.5 | 1200 | 0.4988 | 0.7700 | 0.7738 |
| 0.4737 | 6.42 | 1400 | 0.4823 | 0.7788 | 0.7798 |
| 0.4665 | 7.34 | 1600 | 0.4999 | 0.7720 | 0.7752 |
| 0.4582 | 8.26 | 1800 | 0.4894 | 0.7817 | 0.7830 |
| 0.4631 | 9.17 | 2000 | 0.4824 | 0.7819 | 0.7833 |
| 0.4574 | 10.09 | 2200 | 0.5118 | 0.7611 | 0.7663 |
| 0.4517 | 11.01 | 2400 | 0.4879 | 0.7811 | 0.7841 |
| 0.4455 | 11.93 | 2600 | 0.4735 | 0.7886 | 0.7893 |
| 0.4428 | 12.84 | 2800 | 0.4840 | 0.7766 | 0.7798 |
| 0.4382 | 13.76 | 3000 | 0.5024 | 0.7712 | 0.7761 |
| 0.4323 | 14.68 | 3200 | 0.4920 | 0.7870 | 0.7893 |
| 0.4336 | 15.6 | 3400 | 0.4818 | 0.7847 | 0.7876 |
| 0.4307 | 16.51 | 3600 | 0.4985 | 0.7775 | 0.7807 |
| 0.4247 | 17.43 | 3800 | 0.4830 | 0.7893 | 0.7910 |
| 0.4217 | 18.35 | 4000 | 0.4860 | 0.7895 | 0.7913 |
| 0.4195 | 19.27 | 4200 | 0.5007 | 0.7898 | 0.7919 |
| 0.4169 | 20.18 | 4400 | 0.5000 | 0.7856 | 0.7884 |
| 0.4126 | 21.1 | 4600 | 0.4923 | 0.7923 | 0.7939 |
| 0.4123 | 22.02 | 4800 | 0.4984 | 0.7824 | 0.7850 |
| 0.4059 | 22.94 | 5000 | 0.4944 | 0.7830 | 0.7861 |
| 0.4024 | 23.85 | 5200 | 0.4952 | 0.7799 | 0.7830 |
| 0.399 | 24.77 | 5400 | 0.5144 | 0.7755 | 0.7798 |
| 0.3994 | 25.69 | 5600 | 0.5067 | 0.7887 | 0.7913 |
| 0.3975 | 26.61 | 5800 | 0.4957 | 0.7832 | 0.7858 |
| 0.3908 | 27.52 | 6000 | 0.5261 | 0.7785 | 0.7818 |
| 0.392 | 28.44 | 6200 | 0.4996 | 0.7841 | 0.7867 |
| 0.3887 | 29.36 | 6400 | 0.5025 | 0.7812 | 0.7844 |
| 0.3849 | 30.28 | 6600 | 0.5128 | 0.7742 | 0.7781 |
| 0.3824 | 31.19 | 6800 | 0.5134 | 0.7831 | 0.7858 |
| 0.3811 | 32.11 | 7000 | 0.5071 | 0.7820 | 0.7847 |
| 0.3815 | 33.03 | 7200 | 0.5100 | 0.7833 | 0.7858 |
| 0.3787 | 33.94 | 7400 | 0.5030 | 0.7860 | 0.7881 |
| 0.3754 | 34.86 | 7600 | 0.5084 | 0.7831 | 0.7858 |
| 0.373 | 35.78 | 7800 | 0.5119 | 0.7851 | 0.7876 |
| 0.3724 | 36.7 | 8000 | 0.5201 | 0.7838 | 0.7870 |
| 0.3728 | 37.61 | 8200 | 0.5250 | 0.7716 | 0.7758 |
| 0.3755 | 38.53 | 8400 | 0.5147 | 0.7827 | 0.7853 |
| 0.3679 | 39.45 | 8600 | 0.5214 | 0.7767 | 0.7804 |
| 0.3668 | 40.37 | 8800 | 0.5325 | 0.7759 | 0.7795 |
| 0.3704 | 41.28 | 9000 | 0.5190 | 0.7773 | 0.7807 |
| 0.3628 | 42.2 | 9200 | 0.5186 | 0.7821 | 0.7850 |
| 0.3624 | 43.12 | 9400 | 0.5233 | 0.7824 | 0.7853 |
| 0.3623 | 44.04 | 9600 | 0.5235 | 0.7788 | 0.7821 |
| 0.3628 | 44.95 | 9800 | 0.5291 | 0.7771 | 0.7807 |
| 0.3665 | 45.87 | 10000 | 0.5250 | 0.7809 | 0.7841 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_65536_512_47M", "model-index": [{"name": "GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K36me3-seqsight_65536_512_47M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_65536_512_47M",
"region:us"
] | null | 2024-05-03T16:35:35+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_4096_512_15M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1181
- F1 Score: 0.9561
- Accuracy: 0.9561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2897 | 0.6 | 200 | 0.1532 | 0.9391 | 0.9391 |
| 0.1588 | 1.2 | 400 | 0.1377 | 0.9472 | 0.9472 |
| 0.1482 | 1.81 | 600 | 0.1222 | 0.9521 | 0.9521 |
| 0.1344 | 2.41 | 800 | 0.1194 | 0.9534 | 0.9535 |
| 0.1341 | 3.01 | 1000 | 0.1171 | 0.9550 | 0.9550 |
| 0.1298 | 3.61 | 1200 | 0.1143 | 0.9559 | 0.9559 |
| 0.1276 | 4.22 | 1400 | 0.1130 | 0.9550 | 0.9550 |
| 0.1226 | 4.82 | 1600 | 0.1121 | 0.9572 | 0.9572 |
| 0.1228 | 5.42 | 1800 | 0.1124 | 0.9565 | 0.9565 |
| 0.1177 | 6.02 | 2000 | 0.1118 | 0.9540 | 0.9540 |
| 0.1176 | 6.63 | 2200 | 0.1085 | 0.9582 | 0.9582 |
| 0.1165 | 7.23 | 2400 | 0.1094 | 0.9585 | 0.9585 |
| 0.1131 | 7.83 | 2600 | 0.1084 | 0.9582 | 0.9582 |
| 0.1148 | 8.43 | 2800 | 0.1112 | 0.9576 | 0.9576 |
| 0.1169 | 9.04 | 3000 | 0.1116 | 0.9580 | 0.9580 |
| 0.1117 | 9.64 | 3200 | 0.1169 | 0.9550 | 0.9550 |
| 0.1118 | 10.24 | 3400 | 0.1079 | 0.9593 | 0.9593 |
| 0.1096 | 10.84 | 3600 | 0.1060 | 0.9582 | 0.9582 |
| 0.1065 | 11.45 | 3800 | 0.1081 | 0.9597 | 0.9597 |
| 0.1082 | 12.05 | 4000 | 0.1082 | 0.9585 | 0.9585 |
| 0.1083 | 12.65 | 4200 | 0.1047 | 0.9606 | 0.9606 |
| 0.1077 | 13.25 | 4400 | 0.1059 | 0.9600 | 0.9601 |
| 0.1055 | 13.86 | 4600 | 0.1057 | 0.9597 | 0.9597 |
| 0.1076 | 14.46 | 4800 | 0.1057 | 0.9587 | 0.9587 |
| 0.1028 | 15.06 | 5000 | 0.1041 | 0.9585 | 0.9585 |
| 0.1048 | 15.66 | 5200 | 0.1030 | 0.9597 | 0.9597 |
| 0.1058 | 16.27 | 5400 | 0.1035 | 0.9612 | 0.9612 |
| 0.1016 | 16.87 | 5600 | 0.1028 | 0.9608 | 0.9608 |
| 0.1014 | 17.47 | 5800 | 0.1027 | 0.9604 | 0.9604 |
| 0.1062 | 18.07 | 6000 | 0.1010 | 0.9616 | 0.9616 |
| 0.1016 | 18.67 | 6200 | 0.1018 | 0.9595 | 0.9595 |
| 0.1031 | 19.28 | 6400 | 0.1016 | 0.9606 | 0.9606 |
| 0.097 | 19.88 | 6600 | 0.1047 | 0.9615 | 0.9616 |
| 0.1034 | 20.48 | 6800 | 0.1034 | 0.9608 | 0.9608 |
| 0.0985 | 21.08 | 7000 | 0.1016 | 0.9614 | 0.9614 |
| 0.096 | 21.69 | 7200 | 0.1030 | 0.9612 | 0.9612 |
| 0.098 | 22.29 | 7400 | 0.1037 | 0.9601 | 0.9601 |
| 0.0999 | 22.89 | 7600 | 0.1004 | 0.9616 | 0.9616 |
| 0.097 | 23.49 | 7800 | 0.1019 | 0.9616 | 0.9616 |
| 0.0988 | 24.1 | 8000 | 0.1010 | 0.9614 | 0.9614 |
| 0.0962 | 24.7 | 8200 | 0.1023 | 0.9608 | 0.9608 |
| 0.0973 | 25.3 | 8400 | 0.1015 | 0.9612 | 0.9612 |
| 0.0944 | 25.9 | 8600 | 0.1014 | 0.9614 | 0.9614 |
| 0.0974 | 26.51 | 8800 | 0.1006 | 0.9619 | 0.9619 |
| 0.0968 | 27.11 | 9000 | 0.1001 | 0.9616 | 0.9616 |
| 0.0971 | 27.71 | 9200 | 0.1002 | 0.9614 | 0.9614 |
| 0.0929 | 28.31 | 9400 | 0.1005 | 0.9612 | 0.9612 |
| 0.0997 | 28.92 | 9600 | 0.1000 | 0.9619 | 0.9619 |
| 0.0931 | 29.52 | 9800 | 0.1003 | 0.9617 | 0.9617 |
| 0.0973 | 30.12 | 10000 | 0.1002 | 0.9614 | 0.9614 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_15M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T16:35:47+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1217
- F1 Score: 0.9582
- Accuracy: 0.9582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2508 | 0.6 | 200 | 0.1359 | 0.9487 | 0.9487 |
| 0.1422 | 1.2 | 400 | 0.1312 | 0.9479 | 0.9480 |
| 0.1383 | 1.81 | 600 | 0.1152 | 0.9548 | 0.9548 |
| 0.1266 | 2.41 | 800 | 0.1144 | 0.9572 | 0.9572 |
| 0.1273 | 3.01 | 1000 | 0.1097 | 0.9589 | 0.9589 |
| 0.1216 | 3.61 | 1200 | 0.1115 | 0.9591 | 0.9591 |
| 0.1195 | 4.22 | 1400 | 0.1070 | 0.9599 | 0.9599 |
| 0.1141 | 4.82 | 1600 | 0.1052 | 0.9597 | 0.9597 |
| 0.1124 | 5.42 | 1800 | 0.1060 | 0.9589 | 0.9589 |
| 0.1089 | 6.02 | 2000 | 0.1047 | 0.9585 | 0.9585 |
| 0.1082 | 6.63 | 2200 | 0.1061 | 0.9589 | 0.9589 |
| 0.1055 | 7.23 | 2400 | 0.1037 | 0.9591 | 0.9591 |
| 0.1022 | 7.83 | 2600 | 0.1004 | 0.9591 | 0.9591 |
| 0.1037 | 8.43 | 2800 | 0.1056 | 0.9617 | 0.9617 |
| 0.1061 | 9.04 | 3000 | 0.1053 | 0.9600 | 0.9601 |
| 0.0992 | 9.64 | 3200 | 0.1105 | 0.9584 | 0.9584 |
| 0.0982 | 10.24 | 3400 | 0.1002 | 0.9631 | 0.9631 |
| 0.0983 | 10.84 | 3600 | 0.0987 | 0.9608 | 0.9608 |
| 0.0929 | 11.45 | 3800 | 0.1052 | 0.9584 | 0.9584 |
| 0.0951 | 12.05 | 4000 | 0.1039 | 0.9608 | 0.9608 |
| 0.0946 | 12.65 | 4200 | 0.0979 | 0.9625 | 0.9625 |
| 0.0942 | 13.25 | 4400 | 0.1007 | 0.9625 | 0.9625 |
| 0.0916 | 13.86 | 4600 | 0.1005 | 0.9627 | 0.9627 |
| 0.0927 | 14.46 | 4800 | 0.1026 | 0.9612 | 0.9612 |
| 0.09 | 15.06 | 5000 | 0.1008 | 0.9623 | 0.9623 |
| 0.0889 | 15.66 | 5200 | 0.1002 | 0.9631 | 0.9631 |
| 0.0896 | 16.27 | 5400 | 0.1003 | 0.9631 | 0.9631 |
| 0.0856 | 16.87 | 5600 | 0.0988 | 0.9625 | 0.9625 |
| 0.0853 | 17.47 | 5800 | 0.1013 | 0.9606 | 0.9606 |
| 0.089 | 18.07 | 6000 | 0.0973 | 0.9631 | 0.9631 |
| 0.0842 | 18.67 | 6200 | 0.0977 | 0.9644 | 0.9644 |
| 0.0855 | 19.28 | 6400 | 0.1026 | 0.9627 | 0.9627 |
| 0.0804 | 19.88 | 6600 | 0.1034 | 0.9612 | 0.9612 |
| 0.0855 | 20.48 | 6800 | 0.1006 | 0.9640 | 0.9640 |
| 0.0813 | 21.08 | 7000 | 0.1014 | 0.9634 | 0.9634 |
| 0.0791 | 21.69 | 7200 | 0.1038 | 0.9606 | 0.9606 |
| 0.0798 | 22.29 | 7400 | 0.1026 | 0.9623 | 0.9623 |
| 0.0816 | 22.89 | 7600 | 0.0986 | 0.9629 | 0.9629 |
| 0.0788 | 23.49 | 7800 | 0.1031 | 0.9633 | 0.9633 |
| 0.0791 | 24.1 | 8000 | 0.1003 | 0.9636 | 0.9636 |
| 0.078 | 24.7 | 8200 | 0.1035 | 0.9632 | 0.9633 |
| 0.0779 | 25.3 | 8400 | 0.1004 | 0.9646 | 0.9646 |
| 0.0763 | 25.9 | 8600 | 0.1020 | 0.9631 | 0.9631 |
| 0.0785 | 26.51 | 8800 | 0.1004 | 0.9633 | 0.9633 |
| 0.0777 | 27.11 | 9000 | 0.0999 | 0.9631 | 0.9631 |
| 0.0782 | 27.71 | 9200 | 0.0999 | 0.9638 | 0.9638 |
| 0.0731 | 28.31 | 9400 | 0.1000 | 0.9634 | 0.9634 |
| 0.0791 | 28.92 | 9600 | 0.1005 | 0.9633 | 0.9633 |
| 0.0733 | 29.52 | 9800 | 0.1013 | 0.9631 | 0.9631 |
| 0.0759 | 30.12 | 10000 | 0.1009 | 0.9631 | 0.9631 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T16:36:08+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
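No repo-specific snippet is given; below is a generic causal-LM sketch, assuming the checkpoint loads with the standard `transformers` Auto classes (the prompt is a placeholder, and dtype/quantization settings may need adjusting for this 4-bit checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeongmi/SOLAR_TG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs `accelerate`

prompt = "Hello, how are you?"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```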
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | jeongmi/SOLAR_TG | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-03T16:36:28+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3017 | 1.0 | 1563 | 0.3571 |
| 0.2894 | 2.0 | 3126 | 0.3516 |
| 0.2804 | 3.0 | 4689 | 0.3439 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "prajjwal1/bert-tiny", "model-index": [{"name": "results", "results": []}]} | soheill/results | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:36:31+00:00 |
text-generation | transformers | {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "llama"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"} | predibase/Meta-Llama-3-8B-Instruct-dequantized | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:36:39+00:00 |
|
reinforcement-learning | null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # use `import gym` on older Gym installations

# `load_from_hub` fetches and unpickles the saved Q-table dictionary (a sketch is given below)
model = load_from_hub(repo_id="TeoGal/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
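The `load_from_hub` helper used above is not defined in this card; a minimal sketch of one, assuming the model is stored as a pickled dictionary on the Hub (which the `q-learning.pkl` filename and the `model["env_id"]` lookup suggest):
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dictionary (Q-table, env_id, hyperparameters) from the Hub
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```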
| {"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.50 +/- 2.76", "name": "mean_reward", "verified": false}]}]}]} | TeoGal/q-Taxi-v3 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-05-03T16:36:48+00:00 |
automatic-speech-recognition | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
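No repo-specific snippet is given; below is a generic sketch, assuming the checkpoint loads as a standard Whisper-style ASR model (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ezyyeah/murix-large-v3-1k-MERGED")

result = asr("path/to/audio.wav")  # placeholder path; ffmpeg is needed to decode audio files
print(result["text"])
```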
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ezyyeah/murix-large-v3-1k-MERGED | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:37:32+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_boolq_bert_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3270
- Accuracy: 0.8333
- F1: 0.8243
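The card does not show how to run inference; a minimal sketch, assuming the checkpoint loads as a standard `transformers` text-classification model and takes a BoolQ-style question/passage pair (the example input and expected formatting are hypothetical):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lenatr99/fine_tuned_boolq_bert_croslo")

# Hypothetical yes/no question paired with a passage, passed as a text pair
print(classifier({"text": "Ali je Ljubljana glavno mesto Slovenije?",
                  "text_pair": "Ljubljana je glavno mesto Slovenije."}))
```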
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.5567 | 4.1667 | 50 | 0.5262 | 0.7222 | 0.6523 |
| 0.1098 | 8.3333 | 100 | 0.8949 | 0.8333 | 0.8243 |
| 0.0031 | 12.5 | 150 | 1.2237 | 0.7778 | 0.7778 |
| 0.0011 | 16.6667 | 200 | 1.2641 | 0.7778 | 0.7778 |
| 0.0008 | 20.8333 | 250 | 1.2343 | 0.8333 | 0.8243 |
| 0.0007 | 25.0 | 300 | 1.2852 | 0.8333 | 0.8243 |
| 0.0005 | 29.1667 | 350 | 1.3133 | 0.8333 | 0.8243 |
| 0.0005 | 33.3333 | 400 | 1.3270 | 0.8333 | 0.8243 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "fine_tuned_boolq_bert_croslo", "results": []}]} | lenatr99/fine_tuned_boolq_bert_croslo | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:40:39+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | cilantro9246/lji9v56 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-03T16:41:20+00:00 |
null | null | {} | ZurabDz/albert-geo-culturax-v2 | null | [
"region:us"
] | null | 2024-05-03T16:41:29+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_tf_3-seqsight_4096_512_15M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_15M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_15M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5549
- F1 Score: 0.7132
- Accuracy: 0.716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.634 | 0.93 | 200 | 0.5711 | 0.7179 | 0.718 |
| 0.5965 | 1.87 | 400 | 0.5680 | 0.6977 | 0.698 |
| 0.5884 | 2.8 | 600 | 0.5541 | 0.7093 | 0.712 |
| 0.5816 | 3.74 | 800 | 0.5569 | 0.7101 | 0.71 |
| 0.5801 | 4.67 | 1000 | 0.5477 | 0.7180 | 0.719 |
| 0.572 | 5.61 | 1200 | 0.5466 | 0.7244 | 0.725 |
| 0.5667 | 6.54 | 1400 | 0.5492 | 0.7268 | 0.727 |
| 0.5625 | 7.48 | 1600 | 0.5510 | 0.7201 | 0.721 |
| 0.5596 | 8.41 | 1800 | 0.5482 | 0.7151 | 0.715 |
| 0.5585 | 9.35 | 2000 | 0.5531 | 0.7097 | 0.712 |
| 0.5525 | 10.28 | 2200 | 0.5576 | 0.7141 | 0.714 |
| 0.5513 | 11.21 | 2400 | 0.5523 | 0.7223 | 0.723 |
| 0.5457 | 12.15 | 2600 | 0.5493 | 0.7081 | 0.711 |
| 0.5403 | 13.08 | 2800 | 0.5559 | 0.7157 | 0.72 |
| 0.5398 | 14.02 | 3000 | 0.5553 | 0.7123 | 0.713 |
| 0.5364 | 14.95 | 3200 | 0.5487 | 0.7163 | 0.718 |
| 0.533 | 15.89 | 3400 | 0.5521 | 0.7217 | 0.722 |
| 0.5299 | 16.82 | 3600 | 0.5576 | 0.7101 | 0.711 |
| 0.5268 | 17.76 | 3800 | 0.5548 | 0.7180 | 0.719 |
| 0.5263 | 18.69 | 4000 | 0.5564 | 0.7150 | 0.715 |
| 0.525 | 19.63 | 4200 | 0.5584 | 0.7231 | 0.725 |
| 0.5235 | 20.56 | 4400 | 0.5605 | 0.7337 | 0.734 |
| 0.5209 | 21.5 | 4600 | 0.5643 | 0.7158 | 0.716 |
| 0.5176 | 22.43 | 4800 | 0.5569 | 0.7192 | 0.721 |
| 0.517 | 23.36 | 5000 | 0.5774 | 0.7150 | 0.715 |
| 0.5139 | 24.3 | 5200 | 0.5743 | 0.7049 | 0.706 |
| 0.5084 | 25.23 | 5400 | 0.5705 | 0.7313 | 0.732 |
| 0.5141 | 26.17 | 5600 | 0.5550 | 0.7319 | 0.732 |
| 0.5062 | 27.1 | 5800 | 0.5617 | 0.7247 | 0.725 |
| 0.5043 | 28.04 | 6000 | 0.5665 | 0.7436 | 0.744 |
| 0.5019 | 28.97 | 6200 | 0.5656 | 0.7231 | 0.723 |
| 0.5054 | 29.91 | 6400 | 0.5654 | 0.7313 | 0.732 |
| 0.5034 | 30.84 | 6600 | 0.5657 | 0.7329 | 0.733 |
| 0.5017 | 31.78 | 6800 | 0.5651 | 0.7250 | 0.725 |
| 0.4963 | 32.71 | 7000 | 0.5708 | 0.7240 | 0.724 |
| 0.502 | 33.64 | 7200 | 0.5654 | 0.7260 | 0.726 |
| 0.4912 | 34.58 | 7400 | 0.5763 | 0.7117 | 0.712 |
| 0.495 | 35.51 | 7600 | 0.5726 | 0.7140 | 0.714 |
| 0.495 | 36.45 | 7800 | 0.5827 | 0.7230 | 0.723 |
| 0.4953 | 37.38 | 8000 | 0.5693 | 0.7211 | 0.721 |
| 0.4878 | 38.32 | 8200 | 0.5775 | 0.7261 | 0.726 |
| 0.4885 | 39.25 | 8400 | 0.5817 | 0.7281 | 0.728 |
| 0.4866 | 40.19 | 8600 | 0.5802 | 0.7191 | 0.719 |
| 0.4858 | 41.12 | 8800 | 0.5821 | 0.7221 | 0.722 |
| 0.4915 | 42.06 | 9000 | 0.5763 | 0.7201 | 0.72 |
| 0.4871 | 42.99 | 9200 | 0.5790 | 0.7130 | 0.713 |
| 0.4845 | 43.93 | 9400 | 0.5827 | 0.7120 | 0.712 |
| 0.4834 | 44.86 | 9600 | 0.5823 | 0.7141 | 0.714 |
| 0.4872 | 45.79 | 9800 | 0.5791 | 0.7201 | 0.72 |
| 0.4824 | 46.73 | 10000 | 0.5811 | 0.7141 | 0.714 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_15M", "model-index": [{"name": "GUE_tf_3-seqsight_4096_512_15M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_tf_3-seqsight_4096_512_15M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_15M",
"region:us"
] | null | 2024-05-03T16:41:31+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-finetune-SWE_90_10_EN
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5
- mixed_precision_training: Native AMP
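A rough `transformers.TrainingArguments` equivalent of the configuration above (a sketch only; the output directory is a placeholder, and the dataset, base model and PEFT setup are omitted):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-7b-instruct-v0.2-finetune-swe",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="constant",  # Adam betas/epsilon are left at their defaults (0.9, 0.999, 1e-8)
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```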
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0356 | 0.9995 | 1855 | 1.4789 |
| 0.9073 | 1.9989 | 3710 | 1.4895 |
| 0.349 | 2.9984 | 5565 | 1.6255 |
| 0.2672 | 3.9978 | 7420 | 1.8033 |
| 0.341 | 4.9973 | 9275 | 2.0346 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral-7B-Instruct-v0.2-finetune-SWE_90_10_EN", "results": []}]} | JuanjoLopez19/Mistral-7B-Instruct-v0.2-finetune-SWE_90_10_EN | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-03T16:43:05+00:00 |
null | null | {} | mdiamore/mistral_instruct_coni_form_base_retrain | null | [
"region:us"
] | null | 2024-05-03T16:43:34+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine_tuned_cb_croslo
This model is a fine-tuned version of [EMBEDDIA/crosloengual-bert](https://huggingface.co/EMBEDDIA/crosloengual-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3249
- Accuracy: 0.6818
- F1: 0.6390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| 0.7054 | 3.5714 | 50 | 1.4024 | 0.3182 | 0.1536 |
| 0.3117 | 7.1429 | 100 | 1.0030 | 0.6818 | 0.6383 |
| 0.0286 | 10.7143 | 150 | 1.0108 | 0.7273 | 0.6791 |
| 0.0038 | 14.2857 | 200 | 1.1886 | 0.6818 | 0.6390 |
| 0.0025 | 17.8571 | 250 | 1.2342 | 0.6818 | 0.6390 |
| 0.0019 | 21.4286 | 300 | 1.2576 | 0.7273 | 0.6791 |
| 0.0015 | 25.0 | 350 | 1.2963 | 0.6818 | 0.6390 |
| 0.0015 | 28.5714 | 400 | 1.3249 | 0.6818 | 0.6390 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "EMBEDDIA/crosloengual-bert", "model-index": [{"name": "fine_tuned_cb_croslo", "results": []}]} | lenatr99/fine_tuned_cb_croslo | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:EMBEDDIA/crosloengual-bert",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-03T16:44:34+00:00 |