| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 to 138 |
| author | string | lengths 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-16 06:27:39 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 427 classes |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-16 06:26:18 |
| card | string | lengths 11 to 1.01M |
---|---|---|---|---|---|---|---|---|---|
coleperg/nli-roberta-base-finetuned-for-amazon-review-ratings | coleperg | "2023-03-28T22:19:30Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-28T22:05:48Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: nli-roberta-base-finetuned-for-amazon-review-ratings
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: en
split: validation
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.548
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nli-roberta-base-finetuned-for-amazon-review-ratings
This model is a fine-tuned version of [cross-encoder/nli-roberta-base](https://huggingface.co/cross-encoder/nli-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0089
- Mean absolute error: 0.535
- Accuracy: 0.548
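Below is a minimal inference sketch using the Transformers `pipeline`; note that the card does not document the label-to-star mapping, so the returned `LABEL_*` ids are left uninterpreted here.
```python
from transformers import pipeline

# Load the fine-tuned review-rating classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="coleperg/nli-roberta-base-finetuned-for-amazon-review-ratings",
)

# Returns the predicted rating label and its score (label mapping is undocumented)
print(classifier("Arrived quickly and works exactly as described."))
```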
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Absolute Error | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------:|
| 1.1095 | 1.0 | 313 | 1.0089 | 0.535 | 0.548 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
jonatasgrosman/exp_w2v2t_de_unispeech-sat_s75 | jonatasgrosman | "2022-07-10T12:02:58Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"de",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-10T12:02:15Z" | ---
language:
- de
license: apache-2.0
tags:
- automatic-speech-recognition
- de
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_de_unispeech-sat_s75
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
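Below is a minimal transcription sketch using HuggingSound; the audio path is a hypothetical local file and must be a 16 kHz recording.
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_unispeech-sat_s75")

# Hypothetical path to a 16 kHz German speech recording
audio_paths = ["/path/to/sample.wav"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```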
|
vuiseng9/bert-base-uncased-squad | vuiseng9 | "2022-01-08T18:08:11Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | This model is developed with transformers v4.10.3.
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=bert-base-uncased-squad
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_eval \
--do_train \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--doc_stride 128 \
--max_seq_length 384 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--eval_steps 250 \
--save_steps 2500 \
--logging_steps 1 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-uncased-squad
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-uncased-squad \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
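# Inference
A minimal sketch (not part of the original scripts) for querying the fine-tuned checkpoint with the Transformers question-answering pipeline; the question and context strings below are illustrative only.
```python
from transformers import pipeline

# Load the uploaded checkpoint directly from the Hub
qa = pipeline("question-answering", model="vuiseng9/bert-base-uncased-squad")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This BERT-base model was fine-tuned on the SQuAD v1.1 dataset.",
)
print(result["answer"], result["score"])
```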
|
mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF | mradermacher | "2025-01-29T22:42:06Z" | 280 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-29T22:14:48Z" | ---
base_model: DoppelReflEx/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
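Beyond the linked READMEs, a minimal Python sketch with `llama-cpp-python` is shown below; the quant filename and context size are illustrative assumptions, so adjust them to the file you downloaded and your available memory.
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below has been downloaded locally
llm = Llama(
    model_path="MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q4_K_M.gguf",
    n_ctx=4096,  # context window; lower it if you run out of RAM
)

out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```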
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3-GGUF/resolve/main/MN-12B-Mimicore-WhiteSnake-v2-Experiment-3.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hgnoi/9y6Y8saIIVIMmkag | hgnoi | "2024-05-26T23:36:17Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-26T23:33:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dbands/llama-3-8b-instruct_databricks-dolly-15k_4bit | dbands | "2024-04-26T08:13:00Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-26T08:09:23Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
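A minimal loading sketch with Unsloth is shown below; the `max_seq_length` value is an assumption, so pick one that suits your prompts and GPU memory.
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dbands/llama-3-8b-instruct_databricks-dolly-15k_4bit",
    max_seq_length=2048,  # assumed value; adjust as needed
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode
```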
|
WilliamADSP/Reinforce-CartPole1 | WilliamADSP | "2023-04-27T16:42:20Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-27T16:42:10Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
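A minimal sketch for pulling the trained policy from the Hub is shown below; the `model.pt` filename is an assumption (check the repository's file listing), and unpickling requires the course notebook's `Policy` class to be importable.
```python
import torch
from huggingface_hub import hf_hub_download

# Filename is an assumption -- verify it against the repository contents
path = hf_hub_download(repo_id="WilliamADSP/Reinforce-CartPole1", filename="model.pt")

# Loading a pickled nn.Module needs the original Policy class definition on the path
policy = torch.load(path, map_location="cpu")
policy.eval()
```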
|
SEBIS/legal_t5_small_trans_sv_de_small_finetuned | SEBIS | "2021-06-23T10:07:30Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation Swedish Deustch model",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:04Z" |
---
language: Swedish Deustch
tags:
- translation Swedish Deustch model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "G. Mäns och kvinnors förmåga att delta på lika villkor i det politiska livet och i beslutsfattandet är en grundläggande förutsättning för en verklig demokrati."
---
# legal_t5_small_trans_sv_de_small_finetuned model
A model for translating legal text from Swedish to German (Deutsch). It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from JRC-Acquis, Europarl and DCEP.
## Model description
legal_t5_small_trans_sv_de_small_finetuned is initially pretrained on an unsupervised task using all of the data in the training set. The unsupervised task was "masked language modelling". legal_t5_small_trans_sv_de_small_finetuned is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
## Intended uses & limitations
The model could be used for translation of legal texts from Swedish to German.
### How to use
Here is how to use this model to translate legal text from Swedish to German in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline
pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_sv_de_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_sv_de",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)
sv_text = "G. Mäns och kvinnors förmåga att delta på lika villkor i det politiska livet och i beslutsfattandet är en grundläggande förutsättning för en verklig demokrati."
pipeline([sv_text], max_length=512)
```
## Training data
The legal_t5_small_trans_sv_de_small_finetuned model (trained on the supervised task involving only the corresponding language pair, as well as the unsupervised task where the data of all language pairs was available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
### Pretraining
The pre-training data was the combined data from all the 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.
## Evaluation results
When the model is used for translation test dataset, achieves the following results:
Test results :
| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_sv_de_small_finetuned | 40.240|
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
|
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2 | anas-awadalla | "2022-02-26T07:07:11Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
MNgaix/lora_model | MNgaix | "2025-03-30T13:58:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:mistralai/Codestral-22B-v0.1",
"base_model:finetune:mistralai/Codestral-22B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-25T15:54:13Z" |
|
numerouno00/7201ccdf-8623-43d8-878a-316d599402ae | numerouno00 | "2025-04-11T07:10:12Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-11T06:56:21Z" |
|
MrRobotoAI/132 | MrRobotoAI | "2025-04-08T15:22:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:MrRobotoAI/100",
"base_model:merge:MrRobotoAI/100",
"base_model:MrRobotoAI/101",
"base_model:merge:MrRobotoAI/101",
"base_model:MrRobotoAI/102",
"base_model:merge:MrRobotoAI/102",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-08T15:18:47Z" | ---
base_model:
- MrRobotoAI/100
- MrRobotoAI/102
- MrRobotoAI/101
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [MrRobotoAI/102](https://huggingface.co/MrRobotoAI/102) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/100](https://huggingface.co/MrRobotoAI/100)
* [MrRobotoAI/101](https://huggingface.co/MrRobotoAI/101)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/100
parameters:
density: 0.3333
weight: 0.9
- model: MrRobotoAI/101
parameters:
density: 0.3333
weight: 0.9
- model: MrRobotoAI/102
parameters:
density: 0.3333
weight: 0.9
merge_method: ties
base_model: MrRobotoAI/102
dtype: float16
```
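To reproduce the merge, this configuration can be passed to mergekit's `mergekit-yaml` entry point along with an output directory. A minimal sketch of loading the resulting checkpoint with Transformers, matching the `float16` dtype declared above, might look like:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MrRobotoAI/132")
model = AutoModelForCausalLM.from_pretrained(
    "MrRobotoAI/132",
    torch_dtype=torch.float16,  # matches the dtype in the merge config
    device_map="auto",
)
```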
|
Pearush/deepseek_half_nexp_2 | Pearush | "2025-02-06T17:40:58Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2025-02-06T08:07:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lukehg/2025-04-15-09-18-59-OFJ | lukehg | "2025-04-15T09:19:14Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-15T09:19:14Z" |
|
redmojo7/new_model_id | redmojo7 | "2024-05-13T01:06:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-13T01:06:12Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** redmojo7
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bowilleatyou/1ced6261-e0ba-4b49-8c1f-d8ddffcf193e | bowilleatyou | "2025-02-22T22:39:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-22T20:58:15Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
albertus-sussex/veriscrape-sbert-auto-reference_4_to_verify_6-fold-4 | albertus-sussex | "2025-04-01T02:03:01Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:7290",
"loss:TripletLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:Alibaba-NLP/gte-base-en-v1.5",
"base_model:finetune:Alibaba-NLP/gte-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-04-01T02:02:28Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:7290
- loss:TripletLoss
base_model: Alibaba-NLP/gte-base-en-v1.5
widget:
- source_sentence: 'Fuel consumption: city= 20 (mpg); highway= 27 (mpg); combined=
23 (mpg); vehicle range: 460 miles'
sentences:
- 'Engine: 5.3L V 8 overhead valve ( 9.9 :1 compression ratio ; two valves per cylinder)'
- fuel_economy
- engine
- 11 mpg
- source_sentence: $18,895.00
sentences:
- engine
- $ 51,900
- price
- 'Engine: 2.0L Duratec in-linefour-cylinder DOHC and four valves per cylinder'
- source_sentence: 2011 Ford E-350 Super Duty Commercial Extended Cargo
sentences:
- 11 mpg
- fuel_economy
- 2011 Infiniti G Highlights
- model
- source_sentence: 23 MPG city / 30 MPG highway
sentences:
- fuel_economy
- 2011 Chevrolet Suburban 1500 LT Sport Utility
- model
- 21 mpg City / 28 mpg Hwy
- source_sentence: 21/26
sentences:
- fuel_economy
- 2010 Hyundai Genesis Grand Touring Coupe
- model
- '-'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- silhouette_cosine
- silhouette_euclidean
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- type: cosine_accuracy
value: 1.0
name: Cosine Accuracy
- task:
type: silhouette
name: Silhouette
dataset:
name: Unknown
type: unknown
metrics:
- type: silhouette_cosine
value: 0.9661810994148254
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.8312028646469116
name: Silhouette Euclidean
- type: silhouette_cosine
value: 0.9656964540481567
name: Silhouette Cosine
- type: silhouette_euclidean
value: 0.8298556208610535
name: Silhouette Euclidean
---
# SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) <!-- at revision a829fd0e060bb84554da0dfd354d0de0f7712b7f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("albertus-sussex/veriscrape-sbert-auto-reference_4_to_verify_6-fold-4")
# Run inference
sentences = [
'21/26',
'-',
'2010 Hyundai Genesis Grand Touring Coupe',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.9662** |
| silhouette_euclidean | 0.8312 |
#### Triplet
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
#### Silhouette
* Evaluated with <code>veriscrape.training.SilhouetteEvaluator</code>
| Metric | Value |
|:----------------------|:-----------|
| **silhouette_cosine** | **0.9657** |
| silhouette_euclidean | 0.8299 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 7,290 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 11.17 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 11.14 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.88 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.55 tokens</li><li>max: 5 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.47 tokens</li><li>max: 5 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------|:--------------------------|:--------------------------|
| <code>2010 Dodge Ram 1500 Crew Cab Highlights</code> | <code>2011 GMC Canyon SLT Crew Cab Pickup</code> | <code>$35,300</code> | <code>model</code> | <code>price</code> |
| <code>17 mpg</code> | <code>18 mpg</code> | <code>$17,450</code> | <code>fuel_economy</code> | <code>price</code> |
| <code>2011 FORD EXPEDITION EL TECH SPECS</code> | <code>2011 Ford Escape XLS Sport Utility</code> | <code>11 MPG City / 15 MPG Hwy</code> | <code>model</code> | <code>fuel_economy</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 810 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative</code>, <code>pos_attr_name</code>, and <code>neg_attr_name</code>
* Approximate statistics based on the first 810 samples:
| | anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 11.12 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 11.23 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.38 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.65 tokens</li><li>max: 5 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.46 tokens</li><li>max: 5 tokens</li></ul> |
* Samples:
| anchor | positive | negative | pos_attr_name | neg_attr_name |
|:--------------------------------------------------------------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------|:--------------------------|:--------------------------|
| <code>20/27</code> | <code>MPG City / MPG Hwy</code> | <code>5-Cyl, 2.5 Liter</code> | <code>fuel_economy</code> | <code>engine</code> |
| <code>Vortec 6.0L Variable Valve Timing V8 SFI</code> | <code>Engine: 3.7L VTEC V-6 OHC with variable valve timing and four valves per cylinder</code> | <code>Fuel consumption: city= 12 (mpg); highway= 19 (mpg); combined= 15 (mpg); vehicle range: 261 miles</code> | <code>engine</code> | <code>fuel_economy</code> |
| <code>2011 Subaru Forester 2.5XT Touring Sport Utility</code> | <code>2011 Ford Transit Connect XL (100A) Cargo</code> | <code>11 mpg</code> | <code>model</code> | <code>fuel_economy</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
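For reference, the non-default values above map directly onto `SentenceTransformerTrainingArguments`; a minimal sketch (the `output_dir` is a placeholder, not something stated in this card):
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",            # placeholder, not listed in the card
    eval_strategy="epoch",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=5,
    warmup_ratio=0.1,
)
```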
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_accuracy | silhouette_cosine |
|:-----:|:----:|:-------------:|:---------------:|:---------------:|:-----------------:|
| -1 | -1 | - | - | 0.7543 | 0.3544 |
| 1.0 | 57 | 0.3359 | 0.0 | 1.0 | 0.9656 |
| 2.0 | 114 | 0.0 | 0.0 | 1.0 | 0.9662 |
| 3.0 | 171 | 0.0 | 0.0 | 1.0 | 0.9662 |
| 4.0 | 228 | 0.0 | 0.0 | 1.0 | 0.9662 |
| 5.0 | 285 | 0.0 | 0.0 | 1.0 | 0.9662 |
| -1 | -1 | - | - | 1.0 | 0.9657 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.0.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.5.2
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
RichardErkhov/qgallouedec_-_tiny-MistralForCausalLM-0.1-mlx | RichardErkhov | "2025-02-28T21:45:41Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-28T21:45:10Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
tiny-MistralForCausalLM-0.1 - MLX
- Model creator: https://huggingface.co/qgallouedec/
- Original model: https://huggingface.co/qgallouedec/tiny-MistralForCausalLM-0.1/
# Quick start for LLMs
Install `mlx-lm`:
```
pip install mlx-lm
```
You can use `mlx-lm` from the command line. For example:
```
mlx_lm.generate --model qgallouedec_-_tiny-MistralForCausalLM-0.1-mlx --prompt "hello"
```
This will download a model from the Hugging Face Hub and generate
text using the given prompt.
To chat with an LLM use:
```bash
mlx_lm.chat
```
This will give you a chat REPL that you can use to interact with the LLM. The
chat context is preserved during the lifetime of the REPL.
For a full list of options, run the command of interest with `--help`, for example:
```
mlx_lm.chat --help
```
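Besides the CLI, `mlx-lm` also exposes a small Python API; a minimal sketch (the prompt is illustrative):
```python
from mlx_lm import load, generate

# Loads the converted weights and tokenizer, then samples from an example prompt.
model, tokenizer = load("qgallouedec_-_tiny-MistralForCausalLM-0.1-mlx")
text = generate(model, tokenizer, prompt="hello", verbose=True)
print(text)
```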
Original model description:
---
library_name: transformers
tags:
- trl
---
# Tiny MistralForCausalLM
This is a minimal model built for unit tests in the [TRL](https://github.com/huggingface/trl) library.
|
irishprancer/f3868368-4ed2-4e70-9fad-73bfd308451c | irishprancer | "2025-02-23T01:14:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-22T20:06:47Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ThilinaGunathilaka/fine-tune-sinhala-bert-v3 | ThilinaGunathilaka | "2025-03-19T04:40:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"sinhala",
"masked-language-model",
"sinhala-news",
"si",
"base_model:Ransaka/sinhala-bert-medium-v2",
"base_model:finetune:Ransaka/sinhala-bert-medium-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2025-03-19T04:01:44Z" | ---
library_name: transformers
tags:
- sinhala
- bert
- masked-language-model
- sinhala-news
license: apache-2.0
language:
- si
metrics:
- perplexity
base_model:
- Ransaka/sinhala-bert-medium-v2
---
# Model Card for Sinhala-BERT Fine-Tuned MLM
This model is a fine-tuned version of `Ransaka/sinhala-bert-medium-v2` on the Sinhala News Corpus dataset for Masked Language Modeling (MLM).
## Model Details
### Model Description
This Sinhala-BERT model was fine-tuned specifically for the Sinhala language to improve its capabilities in Masked Language Modeling. It leverages the architecture of BERT and was further optimized on the Sinhala News Corpus dataset, aiming to achieve better contextual language understanding for Sinhala text.
- **Developed by:** [Thilina Gunathilaka]
- **Model type:** Transformer-based Language Model (BERT)
- **Language(s) (NLP):** Sinhala (si)
- **License:** Apache-2.0
- **Finetuned from model [optional]:** [Ransaka/sinhala-bert-medium-v2](https://huggingface.co/Ransaka/sinhala-bert-medium-v2)
### Model Sources [optional]
- **Repository:** [Your Hugging Face Repository URL]
- **Dataset:** [TestData-CrossLingualDocumentSimilarityMeasurement](https://github.com/UdeshAthukorala/TestData-CrossLingualDocumentSimilarityMeasurement)
## Uses
### Direct Use
This model can directly be used for:
- Masked Language Modeling (filling missing words or predicting masked tokens)
- Feature extraction for Sinhala text
### Downstream Use [optional]
This model can be fine-tuned further for various downstream NLP tasks in Sinhala, such as:
- Text Classification
- Named Entity Recognition (NER)
- Sentiment Analysis
### Out-of-Scope Use
- This model is specifically trained for Sinhala. Performance on other languages is likely poor.
- Not suitable for tasks unrelated to textual data.
## Bias, Risks, and Limitations
Like any language model, this model may inherit biases from its training data. It's recommended to assess model predictions for biases before deployment in critical applications.
### Recommendations
- Evaluate model biases before deployment.
- Ensure fair and transparent use of this model in sensitive contexts.
## How to Get Started with the Model
Use the code below to get started with this model:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
model = AutoModelForMaskedLM.from_pretrained("your-username/your-model-name")
```
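For masked-token prediction, the same checkpoint can be wrapped in a `fill-mask` pipeline; this sketch reuses the placeholder repository name above, and the input sentence is only an illustration (it must contain exactly one `[MASK]` token):
```python
from transformers import pipeline

# "your-username/your-model-name" is the placeholder used above, not a real repo id.
fill_mask = pipeline("fill-mask", model="your-username/your-model-name")
# Supply a Sinhala sentence of your choice containing one [MASK] token.
print(fill_mask("<Sinhala sentence with a [MASK] token>"))
```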
## Training Details
### Training Data
The model was trained on the Sinhala News Corpus dataset, comprising Sinhala news articles.
### Training Procedure
- **Tokenization**: Sinhala-specific tokenization and text normalization
- **Max Sequence Length**: 128
- **MLM Probability**: 15%
#### Training Hyperparameters
- **Epochs:** 25
- **Batch Size:** 2 (Gradient accumulation steps: 2)
- **Optimizer:** AdamW
- **Learning Rate:** 3e-5
- **Mixed Precision:** FP32
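A minimal sketch of how this setup is typically expressed with the `transformers` Trainer; dataset preparation is omitted and the `output_dir` is a placeholder, so this is not the exact training script used here:
```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Ransaka/sinhala-bert-medium-v2")
model = AutoModelForMaskedLM.from_pretrained("Ransaka/sinhala-bert-medium-v2")

# 15% of tokens are masked, matching the MLM probability above.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="sinhala-bert-mlm",      # placeholder
    num_train_epochs=25,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=2,
    learning_rate=3e-5,
)
# Trainer(model=model, args=args, data_collator=collator,
#         train_dataset=..., eval_dataset=...) would then be used to train.
```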
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
Sinhala News Corpus dataset test split was used.
#### Metrics
- **Perplexity:** Used to measure language modeling capability.
- **Loss (Cross-Entropy):** Lower is better.
### Results
The final evaluation metrics obtained:
| Metric | Value |
|---------------|-------|
| Perplexity | [15.95] |
| Validation Loss | [2.77] |
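The two figures are consistent with the usual definition of perplexity as the exponential of the cross-entropy loss:
```python
import math
print(math.exp(2.77))  # ≈ 15.96, in line with the reported perplexity
```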
#### Summary
The model achieved strong MLM results on the Sinhala News Corpus dataset, demonstrating improved language understanding.
## Environmental Impact
Carbon emissions were not explicitly tracked. For estimation, refer to [Machine Learning Impact calculator](https://mlco2.github.io/impact).
- **Hardware Type:** GPU (Tesla T4)
- **Hours used:** [Approximate training hours]
- **Cloud Provider:** Kaggle
- **Compute Region:** [Region used, e.g., us-central]
- **Carbon Emitted:** [Estimated CO2 emissions]
## Technical Specifications
### Model Architecture and Objective
Transformer-based BERT architecture optimized for Masked Language Modeling tasks.
### Compute Infrastructure
#### Hardware
- NVIDIA Tesla T4 GPU
#### Software
- Python 3.10
- Transformers library by Hugging Face
- PyTorch
## Citation [optional]
If you use this model, please cite it as:
```bibtex
@misc{yourusername2024sinhalabert,
author = {Your Name},
title = {Sinhala-BERT Fine-Tuned on Sinhala News Corpus},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face Model Hub},
howpublished = {\url{https://huggingface.co/your-username/your-model-name}}
}
```
## Model Card Authors
- [Thilina Gunathilaka] |
ih9511/gemma2-2b_medical_translation_en_ko | ih9511 | "2025-02-19T15:40:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-19T15:32:30Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
auxyus/be580158-4613-45ca-831c-b6c1fe0ff9de | auxyus | "2025-01-25T03:22:59Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-mistral",
"base_model:adapter:echarlaix/tiny-random-mistral",
"license:apache-2.0",
"region:us"
] | null | "2025-01-25T03:21:02Z" | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-mistral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: be580158-4613-45ca-831c-b6c1fe0ff9de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-mistral
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2fbd62eeb0e4156f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2fbd62eeb0e4156f_train_data.json
type:
field_instruction: q
field_output: a
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: auxyus/be580158-4613-45ca-831c-b6c1fe0ff9de
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/2fbd62eeb0e4156f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 991a9506-1420-4238-891e-e06832d29892
wandb_project: Gradients-On-Two
wandb_run: your_name
wandb_runid: 991a9506-1420-4238-891e-e06832d29892
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
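A config like the one above is normally launched through axolotl's CLI; an illustrative command (the config path is a placeholder):
```bash
accelerate launch -m axolotl.cli.train config.yaml
```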
</details><br>
# be580158-4613-45ca-831c-b6c1fe0ff9de
This model is a fine-tuned version of [echarlaix/tiny-random-mistral](https://huggingface.co/echarlaix/tiny-random-mistral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 10.3753 |
| 41.4824 | 0.0030 | 9 | 10.3742 |
| 41.4896 | 0.0061 | 18 | 10.3713 |
| 41.4673 | 0.0091 | 27 | 10.3676 |
| 41.459 | 0.0121 | 36 | 10.3629 |
| 41.4357 | 0.0152 | 45 | 10.3574 |
| 41.4135 | 0.0182 | 54 | 10.3516 |
| 41.3975 | 0.0213 | 63 | 10.3466 |
| 41.3763 | 0.0243 | 72 | 10.3432 |
| 41.3687 | 0.0273 | 81 | 10.3412 |
| 41.3772 | 0.0304 | 90 | 10.3404 |
| 41.362 | 0.0334 | 99 | 10.3403 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF | mradermacher | "2025-03-31T12:59:39Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"JoPmt/Trismal-HyperAmocles-7B-Base-Ties",
"Locutusque/NeuralHyperion-2.0-Mistral-7B",
"en",
"base_model:JoPmt/Trismal-NeurAmoclion-7B-Base-Ties",
"base_model:quantized:JoPmt/Trismal-NeurAmoclion-7B-Base-Ties",
"endpoints_compatible",
"region:us"
] | null | "2025-03-31T12:08:45Z" | ---
base_model: JoPmt/Trismal-NeurAmoclion-7B-Base-Ties
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- JoPmt/Trismal-HyperAmocles-7B-Base-Ties
- Locutusque/NeuralHyperion-2.0-Mistral-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/JoPmt/Trismal-NeurAmoclion-7B-Base-Ties
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
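As a minimal, non-authoritative sketch, a single-file quant from the table below can also be loaded with `llama-cpp-python`, assuming the file has already been downloaded locally:
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below was downloaded beforehand.
llm = Llama(model_path="Trismal-NeurAmoclion-7B-Base-Ties.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```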
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Trismal-NeurAmoclion-7B-Base-Ties-GGUF/resolve/main/Trismal-NeurAmoclion-7B-Base-Ties.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jslowik/distilbert-base-uncased-finetuned-emotion | jslowik | "2022-07-14T15:05:25Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-07-14T15:01:13Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9262423473736914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.9265
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.814 | 1.0 | 250 | 0.3075 | 0.907 | 0.9048 |
| 0.2481 | 2.0 | 500 | 0.2156 | 0.9265 | 0.9262 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
damgomz/fp_bs4_lr1e4_x8 | damgomz | "2024-07-10T01:56:47Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-07-06T10:29:48Z" | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-07-10T03:56:44'
project_name: fp_bs4_lr1e4_x8_emissions_tracker
run_id: f31826d8-c584-437c-a294-0a7f1c2ae386
duration: 188231.78425216675
emissions: 0.1416076810173383
emissions_rate: 7.52304833001169e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 15.0
cpu_energy: 2.2221763482497714
gpu_energy: 0
ram_energy: 0.7842923258195307
energy_consumed: 3.0064686740693056
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 6
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 40
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 188231.78425216675 |
| Emissions (Co2eq in kg) | 0.1416076810173383 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 15.0 |
| CPU energy (kWh) | 2.2221763482497714 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.7842923258195307 |
| Consumed energy (kWh) | 3.0064686740693056 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 6 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.36234618468542096 |
| Emissions (Co2eq in kg) | 0.0737241154987653 |
## Note
July 5, 2024!
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | fp_bs4_lr1e4_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 0.0001 |
| batch_size | 4 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 165344 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 13.577518 | 16.454822 |
| 0.5 | 4.348466 | 8.290026 |
| 1.0 | 7.080169 | 7.084954 |
| 1.5 | 7.062799 | 7.041789 |
| 2.0 | 7.035213 | 7.043707 |
| 2.5 | 7.042095 | 7.035433 |
| 3.0 | 7.019162 | 7.022437 |
| 3.5 | 7.003615 | 7.009589 |
| 4.0 | 6.991288 | 6.997943 |
| 4.5 | 6.990645 | 6.999312 |
| 5.0 | 6.996278 | 6.999325 |
| 5.5 | 6.988248 | 6.996825 |
| 6.0 | 6.977846 | 6.984195 |
|
mergekit-community/mergekit-passthrough-bwvduuf | mergekit-community | "2025-04-05T20:27:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:mergekit-community/mergekit-passthrough-gujurtn",
"base_model:merge:mergekit-community/mergekit-passthrough-gujurtn",
"base_model:mergekit-community/mergekit-slerp-wzipxtu",
"base_model:merge:mergekit-community/mergekit-slerp-wzipxtu",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-05T20:20:31Z" | ---
base_model:
- mergekit-community/mergekit-slerp-wzipxtu
- mergekit-community/mergekit-passthrough-gujurtn
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [mergekit-community/mergekit-slerp-wzipxtu](https://huggingface.co/mergekit-community/mergekit-slerp-wzipxtu)
* [mergekit-community/mergekit-passthrough-gujurtn](https://huggingface.co/mergekit-community/mergekit-passthrough-gujurtn)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: passthrough
dtype: bfloat16
slices:
- sources:
- model: mergekit-community/mergekit-passthrough-gujurtn
layer_range: [0,40]
- sources:
- model: mergekit-community/mergekit-slerp-wzipxtu
layer_range: [40,71]
```
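Such a configuration is usually applied with mergekit's CLI; an illustrative invocation (paths are placeholders):
```bash
# Save the YAML above as config.yaml, then run:
mergekit-yaml config.yaml ./merged-model --cuda
```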
|
fluidapp/meta-llama-3-8b-instruct-gguf | fluidapp | "2024-07-02T20:51:28Z" | 6 | 0 | null | [
"gguf",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-05-20T22:24:28Z" | ---
license: llama3
---
Fork of https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF (using llama.cpp commit ffe6665 for quantization). |
SHENMU007/neunit_BASE_V9.5.11 | SHENMU007 | "2023-09-11T08:56:39Z" | 76 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2023-09-11T07:51:35Z" | ---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
lesso05/4179621a-24e4-41cf-830f-11a82df0ea04 | lesso05 | "2025-02-22T12:08:57Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-02-22T10:18:06Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4179621a-24e4-41cf-830f-11a82df0ea04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 34dd743c8a6d550a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/34dd743c8a6d550a_train_data.json
type:
field_input: intent
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso05/4179621a-24e4-41cf-830f-11a82df0ea04
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000205
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/34dd743c8a6d550a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 50
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3c361f26-a666-42f7-9bd4-8137b9a5a8af
wandb_project: 05a
wandb_run: your_name
wandb_runid: 3c361f26-a666-42f7-9bd4-8137b9a5a8af
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4179621a-24e4-41cf-830f-11a82df0ea04
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000205
- train_batch_size: 4
- eval_batch_size: 4
- seed: 50
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 0.9666 |
| 0.4139 | 0.0017 | 50 | 0.4584 |
| 0.3428 | 0.0034 | 100 | 0.4148 |
| 0.2975 | 0.0051 | 150 | 0.3911 |
| 0.2813 | 0.0067 | 200 | 0.3777 |
| 0.2527 | 0.0084 | 250 | 0.3653 |
| 0.2898 | 0.0101 | 300 | 0.3523 |
| 0.2607 | 0.0118 | 350 | 0.3426 |
| 0.2339 | 0.0135 | 400 | 0.3364 |
| 0.2421 | 0.0152 | 450 | 0.3335 |
| 0.2419 | 0.0169 | 500 | 0.3330 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso07/53819473-1981-4fb2-97c6-6fe42f70715c | lesso07 | "2025-02-21T22:24:29Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
] | null | "2025-02-21T19:05:38Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 53819473-1981-4fb2-97c6-6fe42f70715c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 443df9cfc3ae5ff8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/443df9cfc3ae5ff8_train_data.json
type:
field_input: knowledge
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso07/53819473-1981-4fb2-97c6-6fe42f70715c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000207
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/443df9cfc3ae5ff8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 70
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 67c15ec6-50fc-448d-9e7e-bda80cc29799
wandb_project: 07a
wandb_run: your_name
wandb_runid: 67c15ec6-50fc-448d-9e7e-bda80cc29799
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 53819473-1981-4fb2-97c6-6fe42f70715c
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000207
- train_batch_size: 4
- eval_batch_size: 4
- seed: 70
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 0.8609 |
| 0.8568 | 0.0017 | 50 | 0.5684 |
| 0.7281 | 0.0034 | 100 | 0.5408 |
| 0.7754 | 0.0051 | 150 | 0.5158 |
| 0.7684 | 0.0067 | 200 | 0.5083 |
| 0.7892 | 0.0084 | 250 | 0.4876 |
| 0.7067 | 0.0101 | 300 | 0.4821 |
| 0.7076 | 0.0118 | 350 | 0.4716 |
| 0.7213 | 0.0135 | 400 | 0.4655 |
| 0.673 | 0.0152 | 450 | 0.4624 |
| 0.6611 | 0.0169 | 500 | 0.4619 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
justinwangx/vicuna-robust2-sft-lora | justinwangx | "2023-12-20T05:33:07Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"region:us"
] | null | "2023-12-20T05:31:41Z" | ---
tags:
- generated_from_trainer
model-index:
- name: vicuna-robust2-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vicuna-robust2-sft-lora
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 128
- total_train_batch_size: 2048
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0 | 0 | 1.8516 |
| No log | 0 | 0 | 1.8678 |
| No log | 0 | 0 | 1.9414 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Neulvo/distilbert-base-uncased-finetuned-imdb | Neulvo | "2022-03-16T06:05:40Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-16T05:13:17Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7071 | 1.0 | 157 | 2.4942 |
| 2.5754 | 2.0 | 314 | 2.4235 |
| 2.5426 | 3.0 | 471 | 2.4361 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/karlousm-whosnina__ | huggingtweets | "2021-06-30T06:12:03Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/karlousm-whosnina__/1625033518783/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1407437739985981444/HOdDoSY4_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1396877840763719684/88N2DjSH_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Nina Thee Pony 🎠 & Karlous</div>
<div style="text-align: center; font-size: 14px;">@karlousm-whosnina__</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Nina Thee Pony 🎠 & Karlous.
| Data | Nina Thee Pony 🎠 | Karlous |
| --- | --- | --- |
| Tweets downloaded | 3210 | 3207 |
| Retweets | 717 | 1736 |
| Short tweets | 833 | 175 |
| Tweets kept | 1660 | 1296 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3qlruxax/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @karlousm-whosnina__'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pprte8vc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pprte8vc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/karlousm-whosnina__')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
abrar0503/h2ogpt-gm | abrar0503 | "2025-03-10T10:50:18Z" | 89 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-07-12T12:06:46Z" | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
# Model Card
## Summary
Try our chatbot here: https://gpt-gm.h2o.ai/
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [openlm-research/open_llama_7b_preview_300bt](https://huggingface.co/openlm-research/open_llama_7b_preview_300bt)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2",
use_fast=False,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
mlfoundations-dev/hp_ablations_mistral_scheduler_cosine_warmup0.10_minlr5e-7_dcftv1.2 | mlfoundations-dev | "2024-12-05T10:32:09Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-04T16:53:53Z" | ---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: hp_ablations_mistral_scheduler_cosine_warmup0.10_minlr5e-7_dcftv1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hp_ablations_mistral_scheduler_cosine_warmup0.10_minlr5e-7_dcftv1.2
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the mlfoundations-dev/oh-dcft-v1.2_no-curation_gpt-4o-mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0734
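A minimal generation sketch with `transformers` is shown below; the chat-style prompt assumes the fine-tuned tokenizer ships a chat template, so adjust the prompt format if it does not:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/hp_ablations_mistral_scheduler_cosine_warmup0.10_minlr5e-7_dcftv1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Assumption: the tokenizer carries a chat template from fine-tuning.
messages = [{"role": "user", "content": "Explain cosine learning-rate schedules in two sentences."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```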
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5743 | 0.9976 | 369 | 0.0715 |
| 0.4903 | 1.9973 | 738 | 0.0703 |
| 0.4057 | 2.9970 | 1107 | 0.0734 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | "2023-10-26T11:08:24Z" | 10 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-24T12:54:19Z" | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased
widget:
- text: Nous recevons le premier numéro d ' un nouveau journal , le Radical - Libéral
, qui paraîtra à Genève deux fois la semaine . Son but est de représenter l '
élément national du radicalisme genevois , en d ' autres termes , de défendre
la politique intransigeante do M . Carteret , en opposition aux tendances du groupe
_ > dont le Genevois est l ' organe . Bétail .
---
# Fine-tuned Flair Model on French HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT 64k as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. Further information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
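A minimal tagging sketch with Flair (the sentence is shortened from the widget example above; a recent Flair release is assumed):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load the fine-tuned historic NER tagger directly from this repository
tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5"
)

sentence = Sentence("Nous recevons le premier numéro d'un nouveau journal , le Radical - Libéral , qui paraîtra à Genève deux fois la semaine .")
tagger.predict(sentence)

# print all recognized entities (loc, org, pers, prod, time, comp)
for entity in sentence.get_spans("ner"):
    print(entity)
```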
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs8-e10-lr3e-05` | [0.8389][1] | [0.8466][2] | [0.8299][3] | [0.8391][4] | [0.8427][5] | 0.8394 ± 0.0062 |
| `bs4-e10-lr3e-05` | [0.8279][6] | [0.8364][7] | [0.8404][8] | [0.8382][9] | [**0.8371**][10] | 0.836 ± 0.0048 |
| `bs8-e10-lr5e-05` | [0.8418][11] | [0.8337][12] | [0.831][13] | [0.8346][14] | [0.8352][15] | 0.8353 ± 0.004 |
| `bs4-e10-lr5e-05` | [0.831][16] | [0.8239][17] | [0.7784][18] | [0.8313][19] | [0.8191][20] | 0.8167 ± 0.022 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-fr-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
junn991/gemma2-2b-it-sft-couple | junn991 | "2024-11-21T14:55:01Z" | 60 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-21T14:52:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
olisval/qavito_model | olisval | "2024-10-22T20:52:00Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"region:us"
] | null | "2024-10-20T15:16:11Z" | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
oliverobenrauch/ai-watchultra | oliverobenrauch | "2025-01-26T15:01:14Z" | 8 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-26T14:43:10Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AIWATCHULTRA
---
# Ai Watchultra
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AIWATCHULTRA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('oliverobenrauch/ai-watchultra', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
baby-dev/2-20-03 | baby-dev | "2025-02-20T15:52:23Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"opt",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"region:us"
] | null | "2025-02-20T15:52:19Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: facebook/opt-125m
model-index:
- name: outputs/42/baby-dev/2-20-03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: /workspace/input_data/facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a55243159bc593a3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a55243159bc593a3_train_data.json
type:
field_input: Headline
field_instruction: Link
field_output: Article
field_system: Journalists
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
# hub_model_id: baby-dev/2-20-01
# hub_strategy: checkpoint
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: constant
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 100
micro_batch_size: 4
mlflow_experiment_name: /tmp/a55243159bc593a3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: outputs/42/baby-dev/2-20-03
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ec4daa32-dc59-477e-bbec-56cf29bc685f
wandb_project: SN56-42
wandb_run: your_name
wandb_runid: ec4daa32-dc59-477e-bbec-56cf29bc685f
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# outputs/42/baby-dev/2-20-03
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.6642 |
| 10.002 | 0.0070 | 50 | 2.4652 |
| 9.6914 | 0.0141 | 100 | 2.3438 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
OduguSusmitha/llama-3-8b-Instruct-bnb-4bit-updated_json | OduguSusmitha | "2024-05-27T11:57:49Z" | 10 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-27T05:45:25Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** OduguSusmitha
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
heyyai/cybertruck01 | heyyai | "2023-05-16T09:30:46Z" | 33 | 0 | diffusers | [
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-11-21T17:25:12Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### cybertruck01 Dreambooth model trained by cormacncheese with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
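Alternatively, a minimal local `diffusers` sketch (the instance token `cybertruck01` is assumed to match the concept name used during training):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "heyyai/cybertruck01", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of cybertruck01 parked in the desert at sunset"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("cybertruck01.png")
```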
Sample pictures of this concept:
|
ivangrapher/4d7c2a07-2cb0-4c3b-b93c-70160afc04b0 | ivangrapher | "2025-01-15T19:41:09Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:adapter:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"region:us"
] | null | "2025-01-15T19:40:04Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m-deduped
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4d7c2a07-2cb0-4c3b-b93c-70160afc04b0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m-deduped
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f9d583cbe4595761_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f9d583cbe4595761_train_data.json
type:
field_input: ''
field_instruction: Human
field_output: Assistant
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: ivangrapher/4d7c2a07-2cb0-4c3b-b93c-70160afc04b0
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/f9d583cbe4595761_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 5
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 79866d34-ead5-4f60-be5b-3064df991a9d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 79866d34-ead5-4f60-be5b-3064df991a9d
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4d7c2a07-2cb0-4c3b-b93c-70160afc04b0
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 5.6642 |
| 45.451 | 0.0049 | 8 | 5.5296 |
| 41.0579 | 0.0098 | 16 | 5.3952 |
| 44.0469 | 0.0147 | 24 | 5.3897 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MaziyarPanahi/Qwen2.5-14B-YOYO-V5-GGUF | MaziyarPanahi | "2025-03-22T21:21:46Z" | 0 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:YOYO-AI/Qwen2.5-14B-YOYO-V5",
"base_model:quantized:YOYO-AI/Qwen2.5-14B-YOYO-V5",
"region:us",
"conversational"
] | text-generation | "2025-03-22T20:44:13Z" | ---
base_model: YOYO-AI/Qwen2.5-14B-YOYO-V5
inference: false
model_creator: YOYO-AI
model_name: Qwen2.5-14B-YOYO-V5-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Qwen2.5-14B-YOYO-V5-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-14B-YOYO-V5-GGUF)
- Model creator: [YOYO-AI](https://huggingface.co/YOYO-AI)
- Original model: [YOYO-AI/Qwen2.5-14B-YOYO-V5](https://huggingface.co/YOYO-AI/Qwen2.5-14B-YOYO-V5)
## Description
[MaziyarPanahi/Qwen2.5-14B-YOYO-V5-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2.5-14B-YOYO-V5-GGUF) contains GGUF format model files for [YOYO-AI/Qwen2.5-14B-YOYO-V5](https://huggingface.co/YOYO-AI/Qwen2.5-14B-YOYO-V5).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
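As a quick start with the `llama-cpp-python` client listed above, a minimal sketch is shown below; the quant filename pattern is an assumption, so pick any GGUF file actually present in this repository:
```python
from llama_cpp import Llama

# Downloads a matching GGUF file from this repo; the Q4_K_M pattern is an assumption.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Qwen2.5-14B-YOYO-V5-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Qwen2.5 model family in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```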
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
lesso06/2bcb2419-2bab-4c72-a65e-a86de11f53e1 | lesso06 | "2025-01-25T07:28:58Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b",
"base_model:adapter:unsloth/codegemma-7b",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-25T05:22:19Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2bcb2419-2bab-4c72-a65e-a86de11f53e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b
bf16: true
chat_template: llama3
datasets:
- data_files:
- be3e88192976f3da_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/be3e88192976f3da_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso06/2bcb2419-2bab-4c72-a65e-a86de11f53e1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/be3e88192976f3da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fddb0c49-3e86-4e0d-b36c-1f4a529f37f0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fddb0c49-3e86-4e0d-b36c-1f4a529f37f0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2bcb2419-2bab-4c72-a65e-a86de11f53e1
This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5572 | 0.0001 | 1 | 1.5809 |
| 1.7794 | 0.0006 | 5 | 1.5626 |
| 0.9224 | 0.0012 | 10 | 1.5111 |
| 1.2979 | 0.0017 | 15 | 1.4867 |
| 1.4685 | 0.0023 | 20 | 1.4747 |
| 1.2945 | 0.0029 | 25 | 1.4698 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Roy029/phi-1_5-finetuned-gsm8k | Roy029 | "2023-10-09T07:21:18Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:finetune:microsoft/phi-1_5",
"license:other",
"region:us"
] | null | "2023-10-09T07:01:22Z" | ---
license: other
base_model: microsoft/phi-1_5
tags:
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
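A minimal generation sketch with `transformers` (the GSM8K-style prompt is only an assumption; if the repository holds adapter weights rather than a full checkpoint, load them with `peft` on top of `microsoft/phi-1_5` instead):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Roy029/phi-1_5-finetuned-gsm8k"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = (
    "Question: Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```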
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
FelisDwan/UI-TARS-7B-DPO-Q4_K_M-GGUF | FelisDwan | "2025-01-23T04:41:07Z" | 132 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"gui",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"base_model:bytedance-research/UI-TARS-7B-DPO",
"base_model:quantized:bytedance-research/UI-TARS-7B-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | image-text-to-text | "2025-01-23T04:40:43Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- gui
- llama-cpp
- gguf-my-repo
library_name: transformers
base_model: bytedance-research/UI-TARS-7B-DPO
---
# FelisDwan/UI-TARS-7B-DPO-Q4_K_M-GGUF
This model was converted to GGUF format from [`bytedance-research/UI-TARS-7B-DPO`](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo FelisDwan/UI-TARS-7B-DPO-Q4_K_M-GGUF --hf-file ui-tars-7b-dpo-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo FelisDwan/UI-TARS-7B-DPO-Q4_K_M-GGUF --hf-file ui-tars-7b-dpo-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo FelisDwan/UI-TARS-7B-DPO-Q4_K_M-GGUF --hf-file ui-tars-7b-dpo-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo FelisDwan/UI-TARS-7B-DPO-Q4_K_M-GGUF --hf-file ui-tars-7b-dpo-q4_k_m.gguf -c 2048
```
|
aayush152/speecht5_finetuned_voxpopuli_nl | aayush152 | "2024-05-05T11:52:38Z" | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-05-05T07:43:10Z" | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4620
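A minimal Dutch text-to-speech sketch is shown below; the speaker x-vector source is an assumption, and any 512-dimensional speaker embedding can be substituted:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "aayush152/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Assumption: borrow a 512-dim speaker x-vector from the CMU ARCTIC embeddings dataset.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een voorbeeldzin.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```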
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.5285 | 2.9895 | 1000 | 0.4830 |
| 0.5076 | 5.9791 | 2000 | 0.4697 |
| 0.5048 | 8.9686 | 3000 | 0.4634 |
| 0.4996 | 11.9581 | 4000 | 0.4620 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
YYhnit/ChatGLM3-6B | YYhnit | "2025-03-11T22:20:15Z" | 0 | 0 | flair | [
"flair",
"text-generation",
"arxiv:2103.10360",
"arxiv:2210.02414",
"license:gpl-2.0",
"region:us"
] | text-generation | "2025-03-11T22:19:07Z" | ---
license: gpl-2.0
pipeline_tag: text-generation
library_name: flair
---
---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM3-6B
<p align="center">
💻 <a href="https://github.com/THUDM/ChatGLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-25ti5uohv-A_hs~am_D3Q8XPZMpj7wwQ" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
<p align="center">
📍Experience the larger-scale ChatGLM model at <a href="https://www.chatglm.cn">chatglm.cn</a>
</p>
## Introduction
ChatGLM3-6B is the latest generation of open-source models in the ChatGLM series. While keeping many excellent features of the previous two generations, such as smooth dialogue and a low deployment threshold, ChatGLM3-6B introduces the following features:
1. **A stronger base model:** The base model of ChatGLM3-6B, ChatGLM3-6B-Base, was trained with more diverse data, more training steps, and a more reasonable training strategy. Evaluations on datasets covering semantics, mathematics, reasoning, code, and knowledge show that ChatGLM3-6B-Base delivers the strongest performance among pre-trained models under 10B parameters.
2. **More complete feature support:** ChatGLM3-6B adopts a newly designed [prompt format](PROMPT.md). Besides normal multi-turn dialogue, it natively supports complex scenarios such as [tool calling](tool_using/README.md) (Function Call), code execution (Code Interpreter), and agent tasks.
3. **A more comprehensive open-source series:** In addition to the dialogue model ChatGLM3-6B, the base model ChatGLM3-6B-Base and the long-context dialogue model ChatGLM3-6B-32K have also been open-sourced. All of the above weights are **fully open** for academic research, and **free commercial use is also permitted** after registration via the [questionnaire](https://open.bigmodel.cn/mla/form).
## Dependencies
```shell
pip install protobuf 'transformers>=4.30.2' cpm_kernels 'torch>=2.0' gradio mdtex2html sentencepiece accelerate
```
## Model Download
Download with the ModelScope API:
```shell
pip install modelscope
```
```python
from modelscope import snapshot_download
model_dir = snapshot_download("ZhipuAI/chatglm3-6b", revision = "v1.0.0")
```
Download with git:
```shell
git lfs install
git clone https://www.modelscope.cn/ZhipuAI/chatglm3-6b.git
```
## Usage
You can chat with the ChatGLM3-6B model using the following code:
```python
from modelscope import AutoTokenizer, AutoModel, snapshot_download
model_dir = snapshot_download("ZhipuAI/chatglm3-6b", revision = "v1.0.0")
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).half().cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)
response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
print(response)
```
For more instructions, including how to run the CLI and web demos and how to use model quantization to save GPU memory, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM3).
**Even if you do not have a CUDA device that meets the requirements, you can use the [OpenVINO acceleration framework](https://github.com/openvinotoolkit) to accelerate deployment of the ChatGLM3-6B model on Intel GPUs, CPUs, or integrated graphics.**
We have also prepared a demo in the [Github Repo](https://github.com/THUDM/ChatGLM3/blob/main/Intel_device_demo/openvino_demo/README.md).
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license, while use of the ChatGLM3-6B model weights must comply with the [Model License](MODEL_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers.
```
@article{zeng2022glm,
title={Glm-130b: An open bilingual pre-trained model},
author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
journal={arXiv preprint arXiv:2210.02414},
year={2022}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
``` |
silveroxides/Chroma-GGUF | silveroxides | "2025-04-16T06:12:48Z" | 7,934 | 17 | null | [
"gguf",
"text-to-image",
"base_model:lodestones/Chroma",
"base_model:quantized:lodestones/Chroma",
"license:apache-2.0",
"region:us"
] | text-to-image | "2025-02-24T13:07:36Z" | ---
license: apache-2.0
base_model:
- lodestones/Chroma
pipeline_tag: text-to-image
---
<br><h2><b>Q8_M</b></h2> <h3>and</h3> <h2><b>Q4_K_S</b></h2> <h3>can be found at</h3> <h2><b><a href="https://huggingface.co/Clybius/Chroma-GGUF">Clybius/Chroma-GGUF</a></h2></b>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-BF16.gguf">BF16</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/vWu52TewcRCC2WGudOVbB.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q8_0.gguf">Q8_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/lxlCKpfkKhYkN7sqfMRqL.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q6_K.gguf">Q6_K</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/vS3T3DICIKgQj66Vo9vRJ.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_1.gguf">Q5_1</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/juyZLbU5ndk-qH0UuSN94.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_0.gguf">Q5_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/e3DV-W6d8dacODHV6iQxE.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q5_K_S.gguf">Q5_K_S</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/RJMyAod5l9B00W0byua7Q.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_1.gguf">Q4_1</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/PHALUDJ6v7j9e-gCAOrLF.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_K_M.gguf">Q4_K_M</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/tkNif9yvI-HDkwe9hFbzP.png" height=192 width=192>
</div>
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q4_0.gguf">Q4_0</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/raF3wPpYjZfJa_SXr1FLq.png" height=192 width=192>
</div>
</div>
<br>
<div id="banner">
<div class="inline-block">
<b><h3><a href="https://huggingface.co/silveroxides/Chroma-GGUF/blob/main/chroma-unlocked-v10/chroma-unlocked-v10-Q3_K_L.gguf">Q3_K_L</a></h3></b><img src="https://cdn-uploads.huggingface.co/production/uploads/64159ad9986557e8cac2e333/V4PflwbKdHDgdfQJri1ko.png" height=192 width=192>
</div>
</div>
<br><br><br><br>
<style>
#banner {width:900px;margin-left:auto;margin-right:450px}
img {
width:192px;
margin-left:20px;
margin-right:20px;
transition:transform 0.25s ease;
}
img:hover {
-webkit-transform:scale(3); /* or some other value */
transform:scale(3);
}
</style> |
nhung01/1299c256-1e34-4312-b825-32098a805008 | nhung01 | "2025-01-25T12:41:00Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-25T12:09:44Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1299c256-1e34-4312-b825-32098a805008
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c0265fc94ea38360_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c0265fc94ea38360_train_data.json
type:
field_input: input
field_instruction: question
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/1299c256-1e34-4312-b825-32098a805008
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/c0265fc94ea38360_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 160c4e79-ac5a-4f6f-93a9-af630eb6c0d8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 160c4e79-ac5a-4f6f-93a9-af630eb6c0d8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 1299c256-1e34-4312-b825-32098a805008
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4770
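
Since this repository holds a LoRA adapter rather than full model weights, one plausible way to run it is to load the base model named in the config above and attach the adapter with PEFT. This is a minimal, untested sketch that is not part of the original card; the prompt and generation settings are illustrative assumptions.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/mistral-7b-instruct-v0.2"  # base model from the axolotl config above
adapter_id = "nhung01/1299c256-1e34-4312-b825-32098a805008"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

# Illustrative prompt only; the actual training prompt format is described in the config above.
inputs = tokenizer("Explain what a LoRA adapter is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```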
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.475 | 0.1268 | 200 | 0.4770 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v3.0 | ITT-AF | "2024-02-14T06:32:35Z" | 145 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-30T15:04:18Z" | ---
license: cc-by-nc-4.0
---
# ITT-AF/ITT-42dot_LLM-PLM-1.3B-v3.0
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
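
The card does not include a usage snippet; the following is a minimal sketch of how a causal LM like this is typically loaded with 🤗 Transformers. It is not from the original card, and the prompt is an illustrative assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ITT-AF/ITT-42dot_LLM-PLM-1.3B-v3.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("Write one sentence about language models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```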
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
genki10/ASAP_FineTuningBERT_AugV14_k5_task1_organization_k5_k5_fold0 | genki10 | "2025-02-20T02:37:18Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-20T02:19:27Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV14_k5_task1_organization_k5_k5_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV14_k5_task1_organization_k5_k5_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8273
- Qwk: 0.4218
- Mse: 0.8273
- Rmse: 0.9096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 8 | 5.9637 | 0.0120 | 5.9637 | 2.4421 |
| No log | 2.0 | 16 | 2.6255 | 0.0 | 2.6255 | 1.6203 |
| No log | 3.0 | 24 | 1.1015 | 0.0316 | 1.1015 | 1.0495 |
| No log | 4.0 | 32 | 0.7837 | 0.3621 | 0.7837 | 0.8853 |
| No log | 5.0 | 40 | 0.7147 | 0.4536 | 0.7147 | 0.8454 |
| No log | 6.0 | 48 | 0.6219 | 0.4390 | 0.6219 | 0.7886 |
| No log | 7.0 | 56 | 0.7127 | 0.4067 | 0.7127 | 0.8442 |
| No log | 8.0 | 64 | 0.7120 | 0.5027 | 0.7120 | 0.8438 |
| No log | 9.0 | 72 | 0.6521 | 0.4848 | 0.6521 | 0.8075 |
| No log | 10.0 | 80 | 0.8328 | 0.4095 | 0.8328 | 0.9126 |
| No log | 11.0 | 88 | 0.7633 | 0.4233 | 0.7633 | 0.8736 |
| No log | 12.0 | 96 | 0.8018 | 0.3887 | 0.8018 | 0.8954 |
| No log | 13.0 | 104 | 0.8270 | 0.4328 | 0.8270 | 0.9094 |
| No log | 14.0 | 112 | 0.7459 | 0.4733 | 0.7459 | 0.8637 |
| No log | 15.0 | 120 | 0.7150 | 0.4591 | 0.7150 | 0.8456 |
| No log | 16.0 | 128 | 0.6959 | 0.4701 | 0.6959 | 0.8342 |
| No log | 17.0 | 136 | 0.7889 | 0.4315 | 0.7889 | 0.8882 |
| No log | 18.0 | 144 | 0.8963 | 0.3777 | 0.8963 | 0.9467 |
| No log | 19.0 | 152 | 0.6896 | 0.4649 | 0.6896 | 0.8304 |
| No log | 20.0 | 160 | 0.7278 | 0.4641 | 0.7278 | 0.8531 |
| No log | 21.0 | 168 | 0.6946 | 0.4912 | 0.6946 | 0.8334 |
| No log | 22.0 | 176 | 0.6924 | 0.4943 | 0.6924 | 0.8321 |
| No log | 23.0 | 184 | 0.9489 | 0.3718 | 0.9489 | 0.9741 |
| No log | 24.0 | 192 | 0.8311 | 0.3935 | 0.8311 | 0.9116 |
| No log | 25.0 | 200 | 0.7381 | 0.4456 | 0.7381 | 0.8591 |
| No log | 26.0 | 208 | 0.7380 | 0.4496 | 0.7380 | 0.8591 |
| No log | 27.0 | 216 | 0.7915 | 0.4369 | 0.7915 | 0.8897 |
| No log | 28.0 | 224 | 0.8273 | 0.4218 | 0.8273 | 0.9096 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Stardragon2099/florencetrial-11e | Stardragon2099 | "2024-12-17T05:38:38Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-12-17T05:36:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
stefanylial/Astro_Bin | stefanylial | "2023-07-10T06:41:43Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-07-10T06:41:43Z" | ---
license: creativeml-openrail-m
---
|
mradermacher/sixtyoneeighty-4x7B-v2-GGUF | mradermacher | "2024-12-16T03:32:50Z" | 94 | 0 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"Open-Orca/Mistral-7B-OpenOrca",
"NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story",
"S-miguel/The-Trinity-Coder-7B",
"chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"en",
"base_model:sixtyoneeighty/FNCARL9000",
"base_model:quantized:sixtyoneeighty/FNCARL9000",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-06T08:43:13Z" | ---
base_model: sixtyoneeighty/FNCARL9000
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- Open-Orca/Mistral-7B-OpenOrca
- NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
- S-miguel/The-Trinity-Coder-7B
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/sixtyoneeighty/FNCARL9000
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
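
As a concrete illustration (not part of the original card), a single-file GGUF from the table below can be downloaded from this repo and run locally with llama-cpp-python. The choice of quant and the generation settings here are assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Q4_K_S is one of the quants listed below; any other single-file quant works the same way.
path = hf_hub_download(
    repo_id="mradermacher/sixtyoneeighty-4x7B-v2-GGUF",
    filename="sixtyoneeighty-4x7B-v2.Q4_K_S.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a Mixture-of-Experts model? A:", max_tokens=128)
print(out["choices"][0]["text"])
```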
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q2_K.gguf) | Q2_K | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.IQ3_XS.gguf) | IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.IQ3_S.gguf) | IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.IQ3_M.gguf) | IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q3_K_M.gguf) | Q3_K_M | 11.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q3_K_L.gguf) | Q3_K_L | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.IQ4_XS.gguf) | IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q4_K_S.gguf) | Q4_K_S | 13.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q4_K_M.gguf) | Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q5_K_S.gguf) | Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q5_K_M.gguf) | Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q6_K.gguf) | Q6_K | 19.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/sixtyoneeighty-4x7B-v2-GGUF/resolve/main/sixtyoneeighty-4x7B-v2.Q8_0.gguf) | Q8_0 | 25.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ByeByeFlyGuy/ppo-LunarLander-v2 | ByeByeFlyGuy | "2024-05-06T22:49:58Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-05-06T22:49:39Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 285.59 +/- 26.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
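
The snippet above is left as a template by the card author. A hedged completion might look like the following; the checkpoint filename is an assumption based on the repository name and should be checked against the files actually present in the repo.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; verify it in the "Files and versions" tab of the repository.
checkpoint = load_from_hub("ByeByeFlyGuy/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```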
|
ArtYac/q-Taxi-v3 | ArtYac | "2023-01-19T20:11:41Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-19T20:11:37Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.38 +/- 2.80
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ArtYac/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
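
Note that the snippet above assumes `load_from_hub` and `gym` are already imported; `load_from_hub` is a small helper used in the Hugging Face Deep RL course notebooks rather than a published package function. A plausible sketch of that helper (an assumption, not code shipped with this repo) is:

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a saved Q-Learning model dict (q-table, env_id, hyperparameters)."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```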
|
farenassr/my-autotrain-llm-2 | farenassr | "2024-05-13T14:05:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-13T14:05:20Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
nhung01/9c9bad96-2819-4093-8dbd-17e9abe4d905 | nhung01 | "2025-01-12T17:39:24Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-12T17:31:18Z" | ---
library_name: peft
license: other
base_model: facebook/opt-350m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9c9bad96-2819-4093-8dbd-17e9abe4d905
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-350m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 724497f4649e38f0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/724497f4649e38f0_train_data.json
type:
field_instruction: issue
field_output: post_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/9c9bad96-2819-4093-8dbd-17e9abe4d905
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/724497f4649e38f0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8a17b501-74b5-418a-b3cb-dda3108803a4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8a17b501-74b5-418a-b3cb-dda3108803a4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9c9bad96-2819-4093-8dbd-17e9abe4d905
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 13.4616 | 0.8457 | 200 | 3.2849 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TalentoTechIA/Martin | TalentoTechIA | "2025-01-21T01:21:38Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-01-21T01:11:41Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Martin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Martin
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0169
- Accuracy: 0.9925
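
For reference, a fine-tuned ViT classifier like this one can usually be tried with the image-classification pipeline. This sketch is not part of the original card; the image path is a placeholder assumption, and the label set comes from the (unspecified) training dataset.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="TalentoTechIA/Martin")

# Replace with a real image path or URL.
predictions = classifier("path/to/your_image.jpg")
print(predictions)
```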
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1315 | 3.8462 | 500 | 0.0169 | 0.9925 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
crumb/d38a14-32h-16d | crumb | "2024-08-30T11:45:04Z" | 113 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-30T11:44:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
albertus-sussex/veriscrape-simcse-university-reference_5_to_verify_5-fold-7 | albertus-sussex | "2025-03-28T14:14:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-28T14:14:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
texanrangee/be3fbad2-0982-4d68-a70e-d38184258cb7 | texanrangee | "2025-02-27T03:17:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-27T02:19:33Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tscstudios/iwal7zawwerd8k7vjzyubn9guup1_aaf61d1c-7e2c-4c3b-a8ec-f17c2bd97360 | tscstudios | "2025-03-14T03:47:15Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-14T03:47:12Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Iwal7Zawwerd8K7Vjzyubn9Guup1_Aaf61D1C 7E2C 4C3B A8Ec F17C2Bd97360
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/iwal7zawwerd8k7vjzyubn9guup1_aaf61d1c-7e2c-4c3b-a8ec-f17c2bd97360', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
xueyj/task-1-Qwen-Qwen1.5-7B | xueyj | "2025-02-03T12:39:06Z" | 2,207 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"region:us"
] | null | "2025-01-03T05:44:49Z" | ---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
varun-v-rao/bert-base-cased-mnli-model9 | varun-v-rao | "2024-01-19T14:26:31Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-19T13:00:06Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-mnli-model9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-mnli-model9
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4753
- Accuracy: 0.8367
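
Because this is an NLI model fine-tuned on MNLI, inference takes a premise/hypothesis pair. A minimal sketch (not from the original card) is shown below; note that the mapping from class indices to entailment/neutral/contradiction depends on how the dataset was encoded and should be checked via the model config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "varun-v-rao/bert-base-cased-mnli-model9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label is defined in the model config; print it to see which index means entailment.
print(model.config.id2label, logits.softmax(dim=-1))
```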
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 95
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4636 | 1.0 | 6136 | 0.4446 | 0.8293 |
| 0.3553 | 2.0 | 12272 | 0.4408 | 0.8334 |
| 0.2627 | 3.0 | 18408 | 0.4753 | 0.8367 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
huggingtweets/brandi_love | huggingtweets | "2023-04-14T10:32:39Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-14T10:32:31Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1609624979296948225/DituWr39_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Brandi Love ®</div>
<div style="text-align: center; font-size: 14px;">@brandi_love</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Brandi Love ®.
| Data | Brandi Love ® |
| --- | --- |
| Tweets downloaded | 2627 |
| Retweets | 607 |
| Short tweets | 223 |
| Tweets kept | 1797 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/j2qmrzxs/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @brandi_love's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/rd9gf4dw) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/rd9gf4dw/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/brandi_love')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
QuantFactory/HuatuoGPT-o1-7B-GGUF | QuantFactory | "2025-01-03T07:08:44Z" | 656 | 4 | null | [
"gguf",
"medical",
"text-generation",
"en",
"zh",
"dataset:FreedomIntelligence/medical-o1-reasoning-SFT",
"dataset:FreedomIntelligence/medical-o1-verifiable-problem",
"arxiv:2412.18925",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-03T06:29:50Z" |
---
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
- FreedomIntelligence/medical-o1-verifiable-problem
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- medical
---
[](https://hf.co/QuantFactory)
# QuantFactory/HuatuoGPT-o1-7B-GGUF
This is a quantized version of [FreedomIntelligence/HuatuoGPT-o1-7B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-7B) created using llama.cpp.
# Original Model Card
<div align="center">
<h1>
HuatuoGPT-o1-7B
</h1>
</div>
<div align="center">
<a href="https://github.com/FreedomIntelligence/HuatuoGPT-o1" target="_blank">GitHub</a> | <a href="https://arxiv.org/pdf/2412.18925" target="_blank">Paper</a>
</div>
# <span>Introduction</span>
**HuatuoGPT-o1** is a medical LLM designed for advanced medical reasoning. It generates a complex chain of thought, reflecting on and refining its reasoning, before providing a final response.
For more information, visit our GitHub repository:
[https://github.com/FreedomIntelligence/HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1).
# <span>Model Info</span>
| | Backbone | Supported Languages | Link |
| -------------------- | ------------ | ----- | --------------------------------------------------------------------- |
| **HuatuoGPT-o1-8B** | LLaMA-3.1-8B | English | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-8B) |
| **HuatuoGPT-o1-70B** | LLaMA-3.1-70B | English | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-70B) |
| **HuatuoGPT-o1-7B** | Qwen2.5-7B | English & Chinese | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-7B) |
| **HuatuoGPT-o1-72B** | Qwen2.5-72B | English & Chinese | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-72B) |
# <span>Usage</span>
You can use HuatuoGPT-o1-7B in the same way as `Qwen2.5-7B-Instruct`. You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-7B",torch_dtype="auto",device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-7B")
input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]
inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
HuatuoGPT-o1 adopts a *thinks-before-it-answers* approach, with outputs formatted as:
```
## Thinking
[Reasoning process]
## Final Response
[Output]
```
# <span>📖 Citation</span>
```
@misc{chen2024huatuogpto1medicalcomplexreasoning,
title={HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs},
author={Junying Chen and Zhenyang Cai and Ke Ji and Xidong Wang and Wanlong Liu and Rongsheng Wang and Jianye Hou and Benyou Wang},
year={2024},
eprint={2412.18925},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.18925},
}
```
|
RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf | RichardErkhov | "2024-10-29T16:11:34Z" | 75 | 0 | null | [
"gguf",
"arxiv:2407.10671",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-29T15:47:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2.5-0.5B-Instruct-with-new-merges-serialization - GGUF
- Model creator: https://huggingface.co/pcuenq/
- Original model: https://huggingface.co/pcuenq/Qwen2.5-0.5B-Instruct-with-new-merges-serialization/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q2_K.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q2_K.gguf) | Q2_K | 0.32GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q3_K_S.gguf) | Q3_K_S | 0.32GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q3_K.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q3_K.gguf) | Q3_K | 0.33GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q3_K_M.gguf) | Q3_K_M | 0.33GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.IQ4_XS.gguf) | IQ4_XS | 0.33GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_0.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_0.gguf) | Q4_0 | 0.33GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.IQ4_NL.gguf) | IQ4_NL | 0.33GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_K_S.gguf) | Q4_K_S | 0.36GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_K.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_K.gguf) | Q4_K | 0.37GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_K_M.gguf) | Q4_K_M | 0.37GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_1.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_1.gguf) | Q4_1 | 0.35GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_0.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_0.gguf) | Q5_0 | 0.37GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_K_S.gguf) | Q5_K_S | 0.38GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_K.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_K.gguf) | Q5_K | 0.39GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_K_M.gguf) | Q5_K_M | 0.39GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_1.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q5_1.gguf) | Q5_1 | 0.39GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q6_K.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q6_K.gguf) | Q6_K | 0.47GB |
| [Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q8_0.gguf](https://huggingface.co/RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf/blob/main/Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q8_0.gguf) | Q8_0 | 0.49GB |
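To fetch one of the files listed above programmatically, `huggingface_hub` can be used; the Q4_K_M filename below is just an example, and any row from the table works the same way:
```python
from huggingface_hub import hf_hub_download

# Download a single quant from this repo; pick any filename from the table above
path = hf_hub_download(
    repo_id="RichardErkhov/pcuenq_-_Qwen2.5-0.5B-Instruct-with-new-merges-serialization-gguf",
    filename="Qwen2.5-0.5B-Instruct-with-new-merges-serialization.Q4_K_M.gguf",
)
print(path)
```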
Original model description:
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** of up to 128K tokens and generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
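A quick sanity check before loading the model can catch this early; this is just a convenience sketch, not part of the official instructions:
```python
import transformers
from packaging import version

# The qwen2 architecture was added in transformers 4.37.0 (see the KeyError above)
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; "
        "please upgrade with `pip install -U transformers`."
    )
```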
## Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
adooo/bigmodels | adooo | "2023-06-27T03:20:33Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-05-04T08:37:52Z" | ---
license: openrail
---
<img src="https://huggingface.co/adooo/bigmodels/resolve/main/NSX-1-EzBackground-pruned.png">NSX-1-EzBackground-pruned<br>
|
oussfr12/AJF | oussfr12 | "2025-01-26T17:06:38Z" | 8 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-26T16:40:11Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AJF
---
# Ajf
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AJF` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('oussfr12/AJF', weight_name='lora.safetensors')
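# Per the "Trigger words" section above, the prompt text should include "AJF"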
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/Llama-2-7b-sft-spin-4k-GGUF | mradermacher | "2024-12-13T07:15:18Z" | 31 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"base_model:AmberYifan/Llama-2-7b-sft-spin-4k",
"base_model:quantized:AmberYifan/Llama-2-7b-sft-spin-4k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-13T06:49:00Z" | ---
base_model: AmberYifan/Llama-2-7b-sft-spin-4k
language:
- en
library_name: transformers
model_name: Llama-2-7b-sft-spin-4k
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AmberYifan/Llama-2-7b-sft-spin-4k
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
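For local inference, one option is [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the sketch below assumes you have already downloaded one of the files from the table in the next section (the Q4_K_M filename is used as an example):
```python
from llama_cpp import Llama

# Path to a quant downloaded from this repo (see the "Provided Quants" table below)
llm = Llama(model_path="Llama-2-7b-sft-spin-4k.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short poem about quantization.", max_tokens=128)
print(out["choices"][0]["text"])
```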
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-7b-sft-spin-4k-GGUF/resolve/main/Llama-2-7b-sft-spin-4k.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pythainlp/thaitts-onnx | pythainlp | "2024-01-24T06:11:35Z" | 0 | 0 | null | [
"onnx",
"th",
"license:apache-2.0",
"region:us"
] | null | "2024-01-24T06:08:57Z" | ---
license: apache-2.0
language:
- th
---
# thaitts-onnx
Thai text-to-speech with ONNX Runtime
See model: [https://github.com/PyThaiNLP/thaitts-onnx](https://github.com/PyThaiNLP/thaitts-onnx) |
ProbeX/Model-J__SupViT__model_idx_0195 | ProbeX | "2025-04-15T09:09:21Z" | 0 | 0 | null | [
"safetensors",
"vit",
"region:us"
] | null | "2025-04-15T09:09:07Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
davidschulte/ESM_moroco_moroco | davidschulte | "2025-03-28T12:31:25Z" | 25 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:universityofbucharest/moroco",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-02T16:33:35Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
adolf669/segformer-finetuned-sidewalk-10k-steps | adolf669 | "2024-11-24T11:50:09Z" | 188 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2024-11-23T10:19:24Z" | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-sidewalk-10k-steps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-sidewalk-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5890
- Mean Iou: 0.3100
- Mean Accuracy: 0.3785
- Overall Accuracy: 0.8340
- Accuracy Unlabeled: nan
- Accuracy Flat-road: 0.7750
- Accuracy Flat-sidewalk: 0.9476
- Accuracy Flat-crosswalk: 0.7198
- Accuracy Flat-cyclinglane: 0.8649
- Accuracy Flat-parkingdriveway: 0.2833
- Accuracy Flat-railtrack: nan
- Accuracy Flat-curb: 0.4461
- Accuracy Human-person: 0.6623
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.9432
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: 0.7612
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8657
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.5605
- Accuracy Construction-fenceguardrail: 0.5325
- Accuracy Construction-bridge: 0.0
- Accuracy Construction-tunnel: nan
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.3362
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.8960
- Accuracy Nature-terrain: 0.8673
- Accuracy Sky: 0.9590
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.3077
- Accuracy Void-static: 0.3834
- Accuracy Void-unclear: 0.0
- Iou Unlabeled: nan
- Iou Flat-road: 0.6597
- Iou Flat-sidewalk: 0.8220
- Iou Flat-crosswalk: 0.6669
- Iou Flat-cyclinglane: 0.7094
- Iou Flat-parkingdriveway: 0.2634
- Iou Flat-railtrack: nan
- Iou Flat-curb: 0.2962
- Iou Human-person: 0.4612
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.8019
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: 0.4166
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.6806
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.4268
- Iou Construction-fenceguardrail: 0.4087
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: nan
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.2241
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.8381
- Iou Nature-terrain: 0.7739
- Iou Sky: 0.9036
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.2473
- Iou Void-static: 0.3211
- Iou Void-unclear: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
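A minimal inference sketch with the standard Segformer classes from `transformers`; the image path is illustrative, and the image processor is loaded from the `nvidia/mit-b0` base checkpoint in case this repo does not ship its own preprocessing config:
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained("adolf669/segformer-finetuned-sidewalk-10k-steps")

image = Image.open("street_scene.jpg")  # any RGB street-level photo (hypothetical path)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, height/4, width/4)

# Upsample to the original resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]
```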
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3407
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: polynomial
- training_steps: 10000
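Expressed as `transformers.TrainingArguments`, the list above corresponds roughly to the following; this is a reconstruction for illustration, not the exact training script:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above
args = TrainingArguments(
    output_dir="segformer-finetuned-sidewalk-10k-steps",
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=3407,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="polynomial",
    max_steps=10_000,
)
```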
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 2.6262 | 1.0 | 107 | 1.8030 | 0.1150 | 0.1631 | 0.6678 | nan | 0.4682 | 0.9266 | 0.0 | 0.1510 | 0.0020 | nan | 0.0000 | 0.0 | 0.0 | 0.8961 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7523 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9321 | 0.7060 | 0.3833 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.3811 | 0.6887 | 0.0 | 0.1495 | 0.0020 | nan | 0.0000 | 0.0 | 0.0 | 0.4861 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4729 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6066 | 0.5402 | 0.3543 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.8747 | 2.0 | 214 | 1.5625 | 0.1290 | 0.1774 | 0.6759 | nan | 0.5144 | 0.9502 | 0.0 | 0.2435 | 0.0107 | nan | 0.0000 | 0.0 | 0.0 | 0.8290 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8706 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6522 | 0.8882 | 0.7181 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4081 | 0.6930 | 0.0 | 0.2399 | 0.0106 | nan | 0.0000 | 0.0 | 0.0 | 0.6241 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5101 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.5405 | 0.4174 | 0.6840 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4981 | 3.0 | 321 | 1.2528 | 0.1500 | 0.2000 | 0.7230 | nan | 0.7317 | 0.9017 | 0.0 | 0.5385 | 0.0352 | nan | 0.0 | 0.0 | 0.0 | 0.9288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7741 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8453 | 0.8460 | 0.7985 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4742 | 0.7460 | 0.0 | 0.4950 | 0.0322 | nan | 0.0 | 0.0 | 0.0 | 0.5423 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5264 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6792 | 0.5807 | 0.7242 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3616 | 4.0 | 428 | 1.1121 | 0.1538 | 0.2064 | 0.7329 | nan | 0.6492 | 0.9195 | 0.0 | 0.6889 | 0.0578 | nan | 0.0 | 0.0 | 0.0 | 0.9435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7840 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0008 | 0.0 | 0.0 | 0.8478 | 0.8760 | 0.8387 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4697 | 0.7686 | 0.0 | 0.5062 | 0.0538 | nan | 0.0 | 0.0 | 0.0 | 0.5448 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5363 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0008 | 0.0 | 0.0 | 0.6882 | 0.5835 | 0.7687 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2402 | 5.0 | 535 | 0.9807 | 0.1617 | 0.2044 | 0.7432 | nan | 0.6175 | 0.9496 | 0.0 | 0.6074 | 0.0402 | nan | 0.0 | 0.0 | 0.0 | 0.9149 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9186 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8532 | 0.7998 | 0.8388 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5019 | 0.7449 | 0.0 | 0.5470 | 0.0378 | nan | 0.0 | 0.0 | 0.0 | 0.6501 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5055 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7337 | 0.6708 | 0.7837 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.1318 | 6.0 | 642 | 1.0385 | 0.1477 | 0.2019 | 0.7068 | nan | 0.4984 | 0.9470 | 0.0 | 0.7702 | 0.0691 | nan | 0.0 | 0.0 | 0.0 | 0.9543 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8840 | 0.0 | 0.0178 | 0.0 | 0.0 | nan | 0.0 | 0.0005 | 0.0 | 0.0 | 0.6212 | 0.8926 | 0.8047 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4191 | 0.7292 | 0.0 | 0.5835 | 0.0601 | nan | 0.0 | 0.0 | 0.0 | 0.6061 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5407 | 0.0 | 0.0177 | 0.0 | 0.0 | nan | 0.0 | 0.0005 | 0.0 | 0.0 | 0.5770 | 0.4410 | 0.7513 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.0574 | 7.0 | 749 | 0.9046 | 0.1690 | 0.2127 | 0.7531 | nan | 0.6171 | 0.9520 | 0.0115 | 0.7032 | 0.0985 | nan | 0.0020 | 0.0 | 0.0 | 0.9373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8779 | 0.0 | 0.0486 | 0.0 | 0.0 | nan | 0.0 | 0.0169 | 0.0 | 0.0 | 0.8670 | 0.8317 | 0.8423 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4908 | 0.7535 | 0.0115 | 0.6164 | 0.0845 | nan | 0.0020 | 0.0 | 0.0 | 0.6419 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5622 | 0.0 | 0.0479 | 0.0 | 0.0 | nan | 0.0 | 0.0166 | 0.0 | 0.0 | 0.7327 | 0.6574 | 0.7922 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9759 | 8.0 | 856 | 0.8881 | 0.1756 | 0.2189 | 0.7547 | nan | 0.5985 | 0.9566 | 0.1262 | 0.7070 | 0.0835 | nan | 0.0258 | 0.0 | 0.0 | 0.9402 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8847 | 0.0 | 0.1283 | 0.0001 | 0.0 | nan | 0.0 | 0.0597 | 0.0 | 0.0 | 0.8756 | 0.7936 | 0.8249 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5068 | 0.7552 | 0.1260 | 0.5858 | 0.0748 | nan | 0.0248 | 0.0 | 0.0 | 0.6569 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5553 | 0.0 | 0.1171 | 0.0001 | 0.0 | nan | 0.0 | 0.0558 | 0.0 | 0.0 | 0.7294 | 0.6463 | 0.7841 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9478 | 9.0 | 963 | 0.8427 | 0.1792 | 0.2281 | 0.7627 | nan | 0.6444 | 0.9385 | 0.1576 | 0.8233 | 0.1084 | nan | 0.0637 | 0.0 | 0.0 | 0.9471 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9072 | 0.0 | 0.1206 | 0.0010 | 0.0 | nan | 0.0 | 0.0478 | 0.0 | 0.0 | 0.8482 | 0.8364 | 0.8557 | 0.0 | 0.0 | 0.0001 | 0.0 | nan | 0.5283 | 0.7850 | 0.1503 | 0.5470 | 0.0989 | nan | 0.0594 | 0.0 | 0.0 | 0.6174 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5622 | 0.0 | 0.1114 | 0.0010 | 0.0 | nan | 0.0 | 0.0438 | 0.0 | 0.0 | 0.7500 | 0.6754 | 0.8027 | 0.0 | 0.0 | 0.0001 | 0.0 |
| 0.9357 | 10.0 | 1070 | 0.8261 | 0.1919 | 0.2394 | 0.7663 | nan | 0.7330 | 0.9060 | 0.2756 | 0.7705 | 0.0660 | nan | 0.1561 | 0.0018 | 0.0 | 0.9329 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0039 | 0.0 | 0.0 | 0.8534 | 0.0 | 0.2518 | 0.0002 | 0.0 | nan | 0.0 | 0.0967 | 0.0 | 0.0 | 0.9051 | 0.8362 | 0.8711 | 0.0 | 0.0 | 0.0002 | 0.0 | nan | 0.5551 | 0.7720 | 0.2581 | 0.6407 | 0.0623 | nan | 0.1264 | 0.0018 | 0.0 | 0.6682 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0039 | 0.0 | 0.0 | 0.5674 | 0.0 | 0.2233 | 0.0002 | 0.0 | nan | 0.0 | 0.0776 | 0.0 | 0.0 | 0.7281 | 0.6464 | 0.8107 | 0.0 | 0.0 | 0.0002 | 0.0 |
| 0.8472 | 11.0 | 1177 | 0.7976 | 0.2078 | 0.2573 | 0.7708 | nan | 0.6694 | 0.9321 | 0.6020 | 0.7892 | 0.0857 | nan | 0.1836 | 0.0318 | 0.0 | 0.9266 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8658 | 0.0 | 0.4413 | 0.0004 | 0.0 | nan | 0.0 | 0.1099 | 0.0 | 0.0 | 0.8505 | 0.8369 | 0.9076 | 0.0 | 0.0 | 0.0022 | 0.0 | nan | 0.5349 | 0.7639 | 0.5201 | 0.6244 | 0.0777 | nan | 0.1394 | 0.0313 | 0.0 | 0.6994 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5838 | 0.0 | 0.3358 | 0.0004 | 0.0 | nan | 0.0 | 0.0885 | 0.0 | 0.0 | 0.7588 | 0.6643 | 0.8232 | 0.0 | 0.0 | 0.0022 | 0.0 |
| 0.8613 | 12.0 | 1284 | 0.7624 | 0.2053 | 0.2580 | 0.7782 | nan | 0.7226 | 0.9281 | 0.1215 | 0.7176 | 0.1899 | nan | 0.3386 | 0.0810 | 0.0 | 0.9500 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0216 | 0.0 | 0.0 | 0.8083 | 0.0 | 0.5341 | 0.0030 | 0.0 | nan | 0.0 | 0.2061 | 0.0 | 0.0 | 0.8630 | 0.8930 | 0.8579 | 0.0 | 0.0 | 0.0188 | 0.0 | nan | 0.5747 | 0.7922 | 0.1206 | 0.6327 | 0.1671 | nan | 0.2062 | 0.0747 | 0.0 | 0.6546 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0214 | 0.0 | 0.0 | 0.6156 | 0.0 | 0.3586 | 0.0030 | 0.0 | nan | 0.0 | 0.1199 | 0.0 | 0.0 | 0.7451 | 0.6452 | 0.8198 | 0.0 | 0.0 | 0.0183 | 0.0 |
| 0.7847 | 13.0 | 1391 | 0.7713 | 0.2170 | 0.2694 | 0.7725 | nan | 0.6374 | 0.9509 | 0.6615 | 0.5478 | 0.1946 | nan | 0.3358 | 0.2063 | 0.0 | 0.9425 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0670 | 0.0 | 0.0 | 0.8683 | 0.0 | 0.4287 | 0.0321 | 0.0 | nan | 0.0 | 0.1353 | 0.0 | 0.0 | 0.8484 | 0.8816 | 0.8796 | 0.0 | 0.0 | 0.0039 | 0.0 | nan | 0.5472 | 0.7772 | 0.5301 | 0.4776 | 0.1689 | nan | 0.2091 | 0.1927 | 0.0 | 0.6819 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0665 | 0.0 | 0.0 | 0.6125 | 0.0 | 0.3472 | 0.0320 | 0.0 | nan | 0.0 | 0.1017 | 0.0 | 0.0 | 0.7452 | 0.6313 | 0.8206 | 0.0 | 0.0 | 0.0039 | 0.0 |
| 0.8041 | 14.0 | 1498 | 0.7236 | 0.2310 | 0.2877 | 0.7880 | nan | 0.6190 | 0.9472 | 0.5927 | 0.8157 | 0.1877 | nan | 0.3897 | 0.3524 | 0.0 | 0.9307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1085 | 0.0 | 0.0 | 0.7989 | 0.0 | 0.5597 | 0.0130 | 0.0 | nan | 0.0 | 0.1964 | 0.0 | 0.0 | 0.9106 | 0.8640 | 0.9053 | 0.0 | 0.0 | 0.0136 | 0.0 | nan | 0.5518 | 0.7940 | 0.4508 | 0.6548 | 0.1720 | nan | 0.2188 | 0.3054 | 0.0 | 0.7232 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1064 | 0.0 | 0.0 | 0.6219 | 0.0 | 0.3678 | 0.0129 | 0.0 | nan | 0.0 | 0.1153 | 0.0 | 0.0 | 0.7713 | 0.6756 | 0.8352 | 0.0 | 0.0 | 0.0133 | 0.0 |
| 0.74 | 15.0 | 1605 | 0.7429 | 0.2275 | 0.2771 | 0.7791 | nan | 0.6285 | 0.9519 | 0.4719 | 0.6291 | 0.1591 | nan | 0.3397 | 0.4511 | 0.0 | 0.9334 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1813 | 0.0 | 0.0 | 0.8426 | 0.0 | 0.4337 | 0.0407 | 0.0 | nan | 0.0 | 0.1488 | 0.0 | 0.0 | 0.9095 | 0.8395 | 0.9037 | 0.0 | 0.0 | 0.0042 | 0.0 | nan | 0.5371 | 0.7690 | 0.4183 | 0.5169 | 0.1449 | nan | 0.2088 | 0.3547 | 0.0 | 0.7256 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1607 | 0.0 | 0.0 | 0.6329 | 0.0 | 0.3753 | 0.0401 | 0.0 | nan | 0.0 | 0.1064 | 0.0 | 0.0 | 0.7613 | 0.6850 | 0.8378 | 0.0 | 0.0 | 0.0042 | 0.0 |
| 0.7184 | 16.0 | 1712 | 0.6646 | 0.2439 | 0.2964 | 0.8029 | nan | 0.7431 | 0.9484 | 0.4972 | 0.8103 | 0.2001 | nan | 0.3258 | 0.4629 | 0.0 | 0.9401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1821 | 0.0 | 0.0 | 0.8279 | 0.0 | 0.6112 | 0.0110 | 0.0 | nan | 0.0 | 0.2345 | 0.0 | 0.0 | 0.8917 | 0.8616 | 0.9063 | 0.0 | 0.0 | 0.0313 | 0.0 | nan | 0.6165 | 0.8048 | 0.4637 | 0.6925 | 0.1798 | nan | 0.2280 | 0.3680 | 0.0 | 0.7181 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1722 | 0.0 | 0.0 | 0.6472 | 0.0 | 0.3937 | 0.0109 | 0.0 | nan | 0.0 | 0.1451 | 0.0 | 0.0 | 0.7831 | 0.7086 | 0.8424 | 0.0 | 0.0 | 0.0293 | 0.0 |
| 0.7496 | 17.0 | 1819 | 0.7000 | 0.2385 | 0.2948 | 0.7852 | nan | 0.6870 | 0.9552 | 0.5478 | 0.8104 | 0.2046 | nan | 0.3326 | 0.5163 | 0.0 | 0.9477 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1966 | 0.0 | 0.0 | 0.8765 | 0.0 | 0.5327 | 0.0835 | 0.0 | nan | 0.0 | 0.2037 | 0.0 | 0.0 | 0.7682 | 0.8842 | 0.8746 | 0.0 | 0.0 | 0.0112 | 0.0 | nan | 0.6047 | 0.7973 | 0.4943 | 0.6625 | 0.1800 | nan | 0.2414 | 0.3971 | 0.0 | 0.7056 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1848 | 0.0 | 0.0 | 0.6271 | 0.0 | 0.4151 | 0.0821 | 0.0 | nan | 0.0 | 0.1296 | 0.0 | 0.0 | 0.7041 | 0.5693 | 0.8255 | 0.0 | 0.0 | 0.0103 | 0.0 |
| 0.6723 | 18.0 | 1926 | 0.6689 | 0.2487 | 0.3028 | 0.8022 | nan | 0.7140 | 0.9462 | 0.5937 | 0.8259 | 0.1957 | nan | 0.3702 | 0.5214 | 0.0 | 0.9200 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2293 | 0.0 | 0.0 | 0.8539 | 0.0 | 0.4865 | 0.1947 | 0.0 | nan | 0.0 | 0.1763 | 0.0 | 0.0 | 0.9082 | 0.8340 | 0.9089 | 0.0 | 0.0 | 0.0120 | 0.0 | nan | 0.5970 | 0.8087 | 0.4665 | 0.6640 | 0.1781 | nan | 0.2503 | 0.3889 | 0.0 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2088 | 0.0 | 0.0 | 0.6258 | 0.0 | 0.3857 | 0.1777 | 0.0 | nan | 0.0 | 0.1182 | 0.0 | 0.0 | 0.7836 | 0.6927 | 0.8458 | 0.0 | 0.0 | 0.0116 | 0.0 |
| 0.6711 | 19.0 | 2033 | 0.7171 | 0.2410 | 0.2989 | 0.7914 | nan | 0.6550 | 0.9565 | 0.2743 | 0.8302 | 0.1406 | nan | 0.2934 | 0.6023 | 0.0 | 0.9349 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2585 | 0.0 | 0.0 | 0.8100 | 0.0 | 0.6473 | 0.1674 | 0.0 | nan | 0.0 | 0.2790 | 0.0 | 0.0 | 0.8745 | 0.8440 | 0.9106 | 0.0 | 0.0 | 0.0875 | 0.0 | nan | 0.5597 | 0.7818 | 0.2558 | 0.6593 | 0.1321 | nan | 0.1980 | 0.3907 | 0.0 | 0.7384 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2201 | 0.0 | 0.0 | 0.6550 | 0.0 | 0.4454 | 0.1541 | 0.0 | nan | 0.0 | 0.1459 | 0.0 | 0.0 | 0.7737 | 0.6716 | 0.8494 | 0.0 | 0.0 | 0.0804 | 0.0 |
| 0.6843 | 20.0 | 2140 | 0.6734 | 0.2490 | 0.3017 | 0.7990 | nan | 0.6756 | 0.9455 | 0.4783 | 0.7881 | 0.2300 | nan | 0.3708 | 0.5272 | 0.0 | 0.9268 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3415 | 0.0 | 0.0 | 0.8824 | 0.0 | 0.5309 | 0.1064 | 0.0 | nan | 0.0 | 0.1958 | 0.0 | 0.0 | 0.8984 | 0.8616 | 0.8828 | 0.0 | 0.0 | 0.0131 | 0.0 | nan | 0.5749 | 0.8016 | 0.4521 | 0.6371 | 0.2040 | nan | 0.2474 | 0.4068 | 0.0 | 0.7496 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2855 | 0.0 | 0.0 | 0.6350 | 0.0 | 0.3958 | 0.1030 | 0.0 | nan | 0.0 | 0.1338 | 0.0 | 0.0 | 0.7849 | 0.6948 | 0.8482 | 0.0 | 0.0 | 0.0127 | 0.0 |
| 0.6635 | 21.0 | 2247 | 0.6890 | 0.2588 | 0.3264 | 0.7921 | nan | 0.8326 | 0.8628 | 0.5588 | 0.8419 | 0.2679 | nan | 0.3243 | 0.6557 | 0.0 | 0.9472 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3776 | 0.0 | 0.0 | 0.7708 | 0.0 | 0.5574 | 0.1780 | 0.0 | nan | 0.0 | 0.3384 | 0.0 | 0.0 | 0.9255 | 0.8385 | 0.8964 | 0.0 | 0.0 | 0.2726 | 0.0 | nan | 0.5749 | 0.7876 | 0.4791 | 0.6054 | 0.2392 | nan | 0.1911 | 0.4287 | 0.0 | 0.7368 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2660 | 0.0 | 0.0 | 0.6472 | 0.0 | 0.3842 | 0.1686 | 0.0 | nan | 0.0 | 0.2159 | 0.0 | 0.0 | 0.7949 | 0.7123 | 0.8570 | 0.0 | 0.0 | 0.1917 | 0.0 |
| 0.6789 | 22.0 | 2354 | 0.6569 | 0.2518 | 0.3098 | 0.8048 | nan | 0.6906 | 0.9488 | 0.2921 | 0.8276 | 0.2066 | nan | 0.3660 | 0.7073 | 0.0 | 0.9234 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2937 | 0.0 | 0.0 | 0.8247 | 0.0 | 0.6251 | 0.1025 | 0.0 | nan | 0.0 | 0.2306 | 0.0 | 0.0 | 0.9155 | 0.8941 | 0.8789 | 0.0 | 0.0 | 0.1853 | 0.0 | nan | 0.5897 | 0.7929 | 0.2851 | 0.7088 | 0.1866 | nan | 0.2519 | 0.3796 | 0.0 | 0.7649 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2501 | 0.0 | 0.0 | 0.6571 | 0.0 | 0.4087 | 0.0996 | 0.0 | nan | 0.0 | 0.1627 | 0.0 | 0.0 | 0.7933 | 0.7197 | 0.8451 | 0.0 | 0.0 | 0.1610 | 0.0 |
| 0.6454 | 23.0 | 2461 | 0.6744 | 0.2605 | 0.3209 | 0.8028 | nan | 0.7072 | 0.9467 | 0.5200 | 0.8210 | 0.1523 | nan | 0.3109 | 0.6300 | 0.0 | 0.8984 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5537 | 0.0 | 0.0 | 0.8196 | 0.0 | 0.5994 | 0.2228 | 0.0 | nan | 0.0 | 0.3040 | 0.0 | 0.0 | 0.9410 | 0.7925 | 0.9118 | 0.0 | 0.0 | 0.1383 | 0.0 | nan | 0.5906 | 0.8004 | 0.4780 | 0.7242 | 0.1412 | nan | 0.2191 | 0.3835 | 0.0 | 0.7780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3617 | 0.0 | 0.0 | 0.6282 | 0.0 | 0.3930 | 0.2002 | 0.0 | nan | 0.0 | 0.1673 | 0.0 | 0.0 | 0.7889 | 0.7080 | 0.8552 | 0.0 | 0.0 | 0.1194 | 0.0 |
| 0.6038 | 24.0 | 2568 | 0.6785 | 0.2519 | 0.3080 | 0.7982 | nan | 0.6396 | 0.9585 | 0.4631 | 0.8241 | 0.1844 | nan | 0.3288 | 0.6904 | 0.0 | 0.9496 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2838 | 0.0 | 0.0 | 0.8668 | 0.0 | 0.5447 | 0.1331 | 0.0 | nan | 0.0 | 0.1784 | 0.0 | 0.0 | 0.8765 | 0.8686 | 0.8889 | 0.0 | 0.0 | 0.1761 | 0.0 | nan | 0.5546 | 0.7909 | 0.4552 | 0.6615 | 0.1639 | nan | 0.2402 | 0.4092 | 0.0 | 0.7445 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2344 | 0.0 | 0.0 | 0.6467 | 0.0 | 0.3978 | 0.1283 | 0.0 | nan | 0.0 | 0.1413 | 0.0 | 0.0 | 0.7854 | 0.7015 | 0.8537 | 0.0 | 0.0 | 0.1505 | 0.0 |
| 0.5843 | 25.0 | 2675 | 0.6425 | 0.2625 | 0.3200 | 0.8138 | nan | 0.7555 | 0.9393 | 0.5554 | 0.8420 | 0.1744 | nan | 0.3761 | 0.6702 | 0.0 | 0.9562 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2860 | 0.0 | 0.0 | 0.8611 | 0.0 | 0.5607 | 0.1917 | 0.0 | nan | 0.0 | 0.1636 | 0.0 | 0.0 | 0.9036 | 0.8870 | 0.9333 | 0.0 | 0.0 | 0.1841 | 0.0 | nan | 0.6111 | 0.8171 | 0.5276 | 0.6873 | 0.1643 | nan | 0.2403 | 0.4126 | 0.0 | 0.7284 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2312 | 0.0 | 0.0 | 0.6629 | 0.0 | 0.4188 | 0.1843 | 0.0 | nan | 0.0 | 0.1315 | 0.0 | 0.0 | 0.8117 | 0.7419 | 0.8673 | 0.0 | 0.0 | 0.1604 | 0.0 |
| 0.6123 | 26.0 | 2782 | 0.6211 | 0.2684 | 0.3346 | 0.8108 | nan | 0.7087 | 0.9396 | 0.7111 | 0.8640 | 0.2795 | nan | 0.3913 | 0.6719 | 0.0 | 0.9391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5103 | 0.0 | 0.0 | 0.8288 | 0.0 | 0.5326 | 0.1613 | 0.0 | nan | 0.0 | 0.2742 | 0.0 | 0.0 | 0.8939 | 0.9020 | 0.9356 | 0.0 | 0.0 | 0.1632 | 0.0 | nan | 0.6169 | 0.8093 | 0.5850 | 0.6558 | 0.2548 | nan | 0.2581 | 0.4340 | 0.0 | 0.7676 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3224 | 0.0 | 0.0 | 0.6583 | 0.0 | 0.3977 | 0.1580 | 0.0 | nan | 0.0 | 0.1779 | 0.0 | 0.0 | 0.7857 | 0.6936 | 0.8708 | 0.0 | 0.0 | 0.1444 | 0.0 |
| 0.6164 | 27.0 | 2889 | 0.6278 | 0.2692 | 0.3328 | 0.8106 | nan | 0.7301 | 0.9259 | 0.4546 | 0.8241 | 0.2859 | nan | 0.4266 | 0.7477 | 0.0 | 0.9314 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4932 | 0.0 | 0.0 | 0.8171 | 0.0 | 0.6465 | 0.2815 | 0.0 | nan | 0.0 | 0.2226 | 0.0 | 0.0 | 0.9394 | 0.8207 | 0.9307 | 0.0 | 0.0 | 0.1704 | 0.0 | nan | 0.5995 | 0.8124 | 0.4500 | 0.6896 | 0.2528 | nan | 0.2576 | 0.4422 | 0.0 | 0.7626 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3378 | 0.0 | 0.0 | 0.6537 | 0.0 | 0.4146 | 0.2460 | 0.0 | nan | 0.0 | 0.1643 | 0.0 | 0.0 | 0.7946 | 0.7199 | 0.8725 | 0.0 | 0.0 | 0.1457 | 0.0 |
| 0.5928 | 28.0 | 2996 | 0.6444 | 0.2560 | 0.3123 | 0.8032 | nan | 0.7225 | 0.9455 | 0.5826 | 0.7252 | 0.2540 | nan | 0.3636 | 0.6262 | 0.0 | 0.9521 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4389 | 0.0 | 0.0 | 0.8869 | 0.0 | 0.5355 | 0.2178 | 0.0 | nan | 0.0 | 0.1197 | 0.0 | 0.0 | 0.9124 | 0.7629 | 0.8912 | 0.0 | 0.0 | 0.0572 | 0.0 | nan | 0.6153 | 0.7979 | 0.4936 | 0.6398 | 0.2297 | nan | 0.2420 | 0.4290 | 0.0 | 0.7238 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2985 | 0.0 | 0.0 | 0.6188 | 0.0 | 0.3818 | 0.1974 | 0.0 | nan | 0.0 | 0.1012 | 0.0 | 0.0 | 0.8042 | 0.7110 | 0.8555 | 0.0 | 0.0 | 0.0540 | 0.0 |
| 0.6063 | 29.0 | 3103 | 0.6164 | 0.2670 | 0.3287 | 0.8098 | nan | 0.7636 | 0.9203 | 0.5298 | 0.8515 | 0.2476 | nan | 0.3554 | 0.6410 | 0.0 | 0.9332 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5947 | 0.0 | 0.0 | 0.8756 | 0.0 | 0.4557 | 0.1954 | 0.0 | nan | 0.0 | 0.1901 | 0.0 | 0.0 | 0.9170 | 0.8145 | 0.9374 | 0.0 | 0.0 | 0.2956 | 0.0 | nan | 0.6094 | 0.8098 | 0.4908 | 0.6464 | 0.2266 | nan | 0.2439 | 0.4621 | 0.0 | 0.7694 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3324 | 0.0 | 0.0 | 0.6464 | 0.0 | 0.3436 | 0.1845 | 0.0 | nan | 0.0 | 0.1395 | 0.0 | 0.0 | 0.8088 | 0.7230 | 0.8753 | 0.0 | 0.0 | 0.2309 | 0.0 |
| 0.5813 | 30.0 | 3210 | 0.6119 | 0.2729 | 0.3411 | 0.8105 | nan | 0.6853 | 0.9401 | 0.6264 | 0.8606 | 0.2528 | nan | 0.4579 | 0.5719 | 0.0 | 0.9117 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6665 | 0.0 | 0.0 | 0.7954 | 0.0 | 0.7055 | 0.1902 | 0.0 | nan | 0.0 | 0.2144 | 0.0 | 0.0 | 0.9307 | 0.8167 | 0.9228 | 0.0 | 0.0 | 0.3674 | 0.0 | nan | 0.6009 | 0.8068 | 0.5336 | 0.6663 | 0.2257 | nan | 0.2688 | 0.4321 | 0.0 | 0.7877 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3381 | 0.0 | 0.0 | 0.6567 | 0.0 | 0.3964 | 0.1782 | 0.0 | nan | 0.0 | 0.1573 | 0.0 | 0.0 | 0.8086 | 0.7333 | 0.8815 | 0.0 | 0.0 | 0.2611 | 0.0 |
| 0.5753 | 31.0 | 3317 | 0.6142 | 0.2692 | 0.3382 | 0.8144 | nan | 0.7397 | 0.9468 | 0.1591 | 0.8381 | 0.2301 | nan | 0.4275 | 0.6882 | 0.0 | 0.9288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6825 | 0.0 | 0.0 | 0.8008 | 0.0 | 0.5789 | 0.3769 | 0.0 | nan | 0.0 | 0.3456 | 0.0 | 0.0 | 0.9055 | 0.8430 | 0.9436 | 0.0 | 0.0 | 0.3874 | 0.0 | nan | 0.6312 | 0.8057 | 0.1547 | 0.6974 | 0.2115 | nan | 0.2538 | 0.4668 | 0.0 | 0.7752 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3349 | 0.0 | 0.0 | 0.6634 | 0.0 | 0.4402 | 0.3100 | 0.0 | nan | 0.0 | 0.1839 | 0.0 | 0.0 | 0.8163 | 0.7392 | 0.8800 | 0.0 | 0.0 | 0.2488 | 0.0 |
| 0.5493 | 32.0 | 3424 | 0.6291 | 0.2726 | 0.3368 | 0.8115 | nan | 0.6741 | 0.9459 | 0.5440 | 0.8585 | 0.2403 | nan | 0.3326 | 0.6183 | 0.0 | 0.9290 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6009 | 0.0 | 0.0 | 0.8325 | 0.0 | 0.6410 | 0.3244 | 0.0 | nan | 0.0 | 0.2637 | 0.0 | 0.0 | 0.9228 | 0.8354 | 0.9401 | 0.0 | 0.0 | 0.2739 | 0.0 | nan | 0.5865 | 0.8026 | 0.5007 | 0.6228 | 0.2209 | nan | 0.2419 | 0.4515 | 0.0 | 0.7720 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3170 | 0.0 | 0.0 | 0.6730 | 0.0 | 0.4396 | 0.2779 | 0.0 | nan | 0.0 | 0.1709 | 0.0 | 0.0 | 0.8131 | 0.7372 | 0.8849 | 0.0 | 0.0 | 0.2112 | 0.0 |
| 0.572 | 33.0 | 3531 | 0.6095 | 0.2716 | 0.3339 | 0.8132 | nan | 0.7679 | 0.9161 | 0.4917 | 0.8535 | 0.2678 | nan | 0.3628 | 0.6298 | 0.0 | 0.9277 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5765 | 0.0 | 0.0 | 0.8717 | 0.0 | 0.5620 | 0.3133 | 0.0 | nan | 0.0 | 0.2658 | 0.0 | 0.0 | 0.9195 | 0.8374 | 0.9406 | 0.0 | 0.0 | 0.1813 | 0.0 | nan | 0.6048 | 0.8106 | 0.4751 | 0.6478 | 0.2494 | nan | 0.2390 | 0.4547 | 0.0 | 0.7739 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3014 | 0.0 | 0.0 | 0.6518 | 0.0 | 0.4310 | 0.2716 | 0.0 | nan | 0.0 | 0.1825 | 0.0 | 0.0 | 0.8197 | 0.7451 | 0.8871 | 0.0 | 0.0 | 0.1444 | 0.0 |
| 0.5659 | 34.0 | 3638 | 0.5995 | 0.2738 | 0.3390 | 0.8154 | nan | 0.7757 | 0.9180 | 0.4547 | 0.8543 | 0.2850 | nan | 0.4562 | 0.6517 | 0.0 | 0.9198 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6006 | 0.0 | 0.0 | 0.8648 | 0.0 | 0.5829 | 0.3047 | 0.0 | nan | 0.0 | 0.3037 | 0.0 | 0.0 | 0.9198 | 0.7874 | 0.9499 | 0.0 | 0.0 | 0.2188 | 0.0 | nan | 0.6275 | 0.8164 | 0.4468 | 0.6083 | 0.2625 | nan | 0.2727 | 0.4371 | 0.0 | 0.7805 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3290 | 0.0 | 0.0 | 0.6634 | 0.0 | 0.4357 | 0.2797 | 0.0 | nan | 0.0 | 0.2018 | 0.0 | 0.0 | 0.8151 | 0.7264 | 0.8810 | 0.0 | 0.0 | 0.1772 | 0.0 |
| 0.5575 | 35.0 | 3745 | 0.5962 | 0.2682 | 0.3311 | 0.8191 | nan | 0.7740 | 0.9371 | 0.2343 | 0.8567 | 0.2454 | nan | 0.3847 | 0.7019 | 0.0 | 0.9294 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6050 | 0.0 | 0.0 | 0.8382 | 0.0 | 0.5856 | 0.2783 | 0.0 | nan | 0.0 | 0.2568 | 0.0 | 0.0 | 0.9207 | 0.8675 | 0.9434 | 0.0 | 0.0 | 0.2367 | 0.0 | nan | 0.6334 | 0.8124 | 0.2298 | 0.6599 | 0.2273 | nan | 0.2663 | 0.4582 | 0.0 | 0.7808 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3277 | 0.0 | 0.0 | 0.6675 | 0.0 | 0.4400 | 0.2565 | 0.0 | nan | 0.0 | 0.1763 | 0.0 | 0.0 | 0.8190 | 0.7467 | 0.8883 | 0.0 | 0.0 | 0.1938 | 0.0 |
| 0.5385 | 36.0 | 3852 | 0.6016 | 0.2774 | 0.3421 | 0.8201 | nan | 0.7738 | 0.9329 | 0.5348 | 0.8502 | 0.2559 | nan | 0.4350 | 0.6830 | 0.0 | 0.9382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5540 | 0.0 | 0.0 | 0.8252 | 0.0 | 0.5325 | 0.3373 | 0.0 | nan | 0.0 | 0.2487 | 0.0 | 0.0 | 0.9255 | 0.8700 | 0.9223 | 0.0 | 0.0 | 0.3264 | 0.0 | nan | 0.6329 | 0.8225 | 0.5163 | 0.6751 | 0.2349 | nan | 0.2794 | 0.4422 | 0.0 | 0.7515 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2940 | 0.0 | 0.0 | 0.6535 | 0.0 | 0.4284 | 0.2885 | 0.0 | nan | 0.0 | 0.1702 | 0.0 | 0.0 | 0.8221 | 0.7615 | 0.8847 | 0.0 | 0.0 | 0.2205 | 0.0 |
| 0.5514 | 37.0 | 3959 | 0.5875 | 0.2765 | 0.3385 | 0.8182 | nan | 0.7511 | 0.9456 | 0.7456 | 0.8527 | 0.1887 | nan | 0.3801 | 0.6007 | 0.0 | 0.9364 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5184 | 0.0 | 0.0 | 0.8594 | 0.0 | 0.5793 | 0.3400 | 0.0 | nan | 0.0 | 0.2364 | 0.0 | 0.0 | 0.9123 | 0.8157 | 0.9486 | 0.0 | 0.0005 | 0.2218 | 0.0 | nan | 0.6242 | 0.8136 | 0.6078 | 0.6635 | 0.1786 | nan | 0.2639 | 0.4478 | 0.0 | 0.7892 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2984 | 0.0 | 0.0 | 0.6629 | 0.0 | 0.4149 | 0.2929 | 0.0 | nan | 0.0 | 0.1679 | 0.0 | 0.0 | 0.8203 | 0.7379 | 0.8733 | 0.0 | 0.0005 | 0.1913 | 0.0 |
| 0.5646 | 38.0 | 4066 | 0.5851 | 0.2778 | 0.3461 | 0.8213 | nan | 0.7679 | 0.9433 | 0.2984 | 0.8542 | 0.2606 | nan | 0.4306 | 0.6745 | 0.0 | 0.9418 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6483 | 0.0 | 0.0 | 0.7969 | 0.0 | 0.6172 | 0.4199 | 0.0 | nan | 0.0 | 0.3028 | 0.0 | 0.0 | 0.9174 | 0.8176 | 0.9435 | 0.0 | 0.0 | 0.4390 | 0.0 | nan | 0.6480 | 0.8126 | 0.2820 | 0.6944 | 0.2379 | nan | 0.2893 | 0.4644 | 0.0 | 0.7608 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3057 | 0.0 | 0.0 | 0.6647 | 0.0 | 0.4662 | 0.3524 | 0.0 | nan | 0.0 | 0.2084 | 0.0 | 0.0 | 0.8224 | 0.7383 | 0.8829 | 0.0 | 0.0 | 0.2589 | 0.0 |
| 0.5159 | 39.0 | 4173 | 0.5972 | 0.2695 | 0.3261 | 0.8223 | nan | 0.8123 | 0.9386 | 0.1507 | 0.8524 | 0.2827 | nan | 0.3794 | 0.5939 | 0.0 | 0.9299 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5320 | 0.0 | 0.0 | 0.8163 | 0.0 | 0.5226 | 0.2399 | 0.0 | nan | 0.0 | 0.2565 | 0.0 | 0.0 | 0.9301 | 0.8507 | 0.9229 | 0.0 | 0.0343 | 0.3888 | 0.0 | nan | 0.6372 | 0.8230 | 0.1498 | 0.7153 | 0.2585 | nan | 0.2692 | 0.4527 | 0.0 | 0.7899 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3226 | 0.0 | 0.0 | 0.6551 | 0.0 | 0.4115 | 0.2302 | 0.0 | nan | 0.0 | 0.1851 | 0.0 | 0.0 | 0.8199 | 0.7428 | 0.8828 | 0.0 | 0.0338 | 0.2433 | 0.0 |
| 0.5437 | 40.0 | 4280 | 0.6458 | 0.2638 | 0.3251 | 0.8133 | nan | 0.7190 | 0.9530 | 0.1568 | 0.8506 | 0.2216 | nan | 0.3876 | 0.6859 | 0.0 | 0.9265 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5604 | 0.0 | 0.0 | 0.8024 | 0.0 | 0.5051 | 0.3324 | 0.0 | nan | 0.0 | 0.3104 | 0.0 | 0.0 | 0.9478 | 0.7896 | 0.9378 | 0.0 | 0.0034 | 0.3133 | 0.0 | nan | 0.6243 | 0.8034 | 0.1541 | 0.6898 | 0.2043 | nan | 0.2718 | 0.4106 | 0.0 | 0.7866 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3329 | 0.0 | 0.0 | 0.6448 | 0.0 | 0.3972 | 0.2899 | 0.0 | nan | 0.0 | 0.2032 | 0.0 | 0.0 | 0.8065 | 0.7190 | 0.8874 | 0.0 | 0.0033 | 0.2121 | 0.0 |
| 0.4939 | 41.0 | 4387 | 0.5953 | 0.2727 | 0.3344 | 0.8254 | nan | 0.7959 | 0.9461 | 0.1852 | 0.8392 | 0.2900 | nan | 0.4223 | 0.6795 | 0.0 | 0.9282 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6097 | 0.0 | 0.0 | 0.8611 | 0.0 | 0.4823 | 0.3292 | 0.0 | nan | 0.0 | 0.3134 | 0.0 | 0.0 | 0.9086 | 0.8670 | 0.9295 | 0.0 | 0.0145 | 0.2979 | 0.0 | nan | 0.6661 | 0.8216 | 0.1796 | 0.7319 | 0.2653 | nan | 0.2834 | 0.4419 | 0.0 | 0.7931 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3199 | 0.0 | 0.0 | 0.6450 | 0.0 | 0.3787 | 0.2909 | 0.0 | nan | 0.0 | 0.2106 | 0.0 | 0.0 | 0.8306 | 0.7667 | 0.8830 | 0.0 | 0.0140 | 0.2050 | 0.0 |
| 0.506 | 42.0 | 4494 | 0.5911 | 0.2708 | 0.3345 | 0.8198 | nan | 0.7873 | 0.9425 | 0.2025 | 0.8426 | 0.2298 | nan | 0.4588 | 0.6948 | 0.0 | 0.9262 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6237 | 0.0 | 0.0 | 0.8128 | 0.0 | 0.5311 | 0.3261 | 0.0 | nan | 0.0 | 0.2602 | 0.0 | 0.0 | 0.9298 | 0.8085 | 0.9480 | 0.0 | 0.0 | 0.3804 | 0.0 | nan | 0.6450 | 0.8105 | 0.1975 | 0.7161 | 0.2125 | nan | 0.2897 | 0.4452 | 0.0 | 0.7978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3305 | 0.0 | 0.0 | 0.6536 | 0.0 | 0.3841 | 0.2815 | 0.0 | nan | 0.0 | 0.1810 | 0.0 | 0.0 | 0.8288 | 0.7434 | 0.8883 | 0.0 | 0.0 | 0.2588 | 0.0 |
| 0.509 | 43.0 | 4601 | 0.5833 | 0.2868 | 0.3498 | 0.8205 | nan | 0.7739 | 0.9379 | 0.5441 | 0.8511 | 0.2662 | nan | 0.4565 | 0.6319 | 0.0 | 0.9412 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5537 | 0.0 | 0.0 | 0.8590 | 0.0 | 0.4568 | 0.4407 | 0.0 | nan | 0.0 | 0.3014 | 0.0 | 0.0 | 0.8854 | 0.8643 | 0.9519 | 0.0 | 0.2006 | 0.2775 | 0.0 | nan | 0.6424 | 0.8171 | 0.5156 | 0.6938 | 0.2412 | nan | 0.2817 | 0.4313 | 0.0 | 0.7883 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3691 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.3882 | 0.3200 | 0.0 | nan | 0.0 | 0.1917 | 0.0 | 0.0 | 0.8231 | 0.7458 | 0.8888 | 0.0 | 0.1764 | 0.2126 | 0.0 |
| 0.492 | 44.0 | 4708 | 0.5778 | 0.2894 | 0.3492 | 0.8279 | nan | 0.7784 | 0.9384 | 0.4728 | 0.8394 | 0.3692 | nan | 0.4454 | 0.6222 | 0.0 | 0.9326 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6000 | 0.0 | 0.0 | 0.8719 | 0.0 | 0.5171 | 0.3511 | 0.0 | nan | 0.0 | 0.2849 | 0.0 | 0.0 | 0.9081 | 0.8644 | 0.9380 | 0.0 | 0.1613 | 0.2784 | 0.0 | nan | 0.6517 | 0.8181 | 0.4574 | 0.7044 | 0.3303 | nan | 0.2926 | 0.4394 | 0.0 | 0.7927 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3677 | 0.0 | 0.0 | 0.6662 | 0.0 | 0.3997 | 0.2946 | 0.0 | nan | 0.0 | 0.1834 | 0.0 | 0.0 | 0.8346 | 0.7660 | 0.8938 | 0.0 | 0.1488 | 0.2201 | 0.0 |
| 0.4965 | 45.0 | 4815 | 0.5756 | 0.2907 | 0.3584 | 0.8280 | nan | 0.7832 | 0.9404 | 0.5044 | 0.8364 | 0.3479 | nan | 0.4872 | 0.6981 | 0.0 | 0.9539 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5872 | 0.0 | 0.0 | 0.8024 | 0.0 | 0.5819 | 0.4517 | 0.0 | nan | 0.0 | 0.3394 | 0.0 | 0.0 | 0.9036 | 0.8622 | 0.9505 | 0.0 | 0.0307 | 0.4088 | 0.0 | nan | 0.6637 | 0.8233 | 0.4888 | 0.7297 | 0.3113 | nan | 0.2962 | 0.4542 | 0.0 | 0.7496 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3476 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.4196 | 0.3478 | 0.0 | nan | 0.0 | 0.2141 | 0.0 | 0.0 | 0.8337 | 0.7634 | 0.8923 | 0.0 | 0.0300 | 0.2841 | 0.0 |
| 0.5002 | 46.0 | 4922 | 0.5911 | 0.2876 | 0.3519 | 0.8241 | nan | 0.7498 | 0.9490 | 0.6335 | 0.8538 | 0.2570 | nan | 0.4053 | 0.6984 | 0.0 | 0.9179 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6869 | 0.0 | 0.0 | 0.8449 | 0.0 | 0.5818 | 0.3487 | 0.0 | nan | 0.0 | 0.3070 | 0.0 | 0.0 | 0.9241 | 0.8325 | 0.9416 | 0.0 | 0.0284 | 0.2998 | 0.0 | nan | 0.6405 | 0.8102 | 0.5883 | 0.7132 | 0.2382 | nan | 0.2768 | 0.4374 | 0.0 | 0.8038 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3840 | 0.0 | 0.0 | 0.6677 | 0.0 | 0.4052 | 0.2988 | 0.0 | nan | 0.0 | 0.2030 | 0.0 | 0.0 | 0.8288 | 0.7556 | 0.8939 | 0.0 | 0.0275 | 0.2317 | 0.0 |
| 0.4593 | 47.0 | 5029 | 0.5917 | 0.2826 | 0.3528 | 0.8187 | nan | 0.7586 | 0.9405 | 0.7273 | 0.8595 | 0.2065 | nan | 0.4141 | 0.6501 | 0.0 | 0.9297 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7871 | 0.0 | 0.0 | 0.8608 | 0.0 | 0.5563 | 0.4812 | 0.0 | nan | 0.0 | 0.2866 | 0.0 | 0.0 | 0.9116 | 0.7873 | 0.9549 | 0.0 | 0.0086 | 0.1702 | 0.0 | nan | 0.6428 | 0.8118 | 0.6553 | 0.6713 | 0.1926 | nan | 0.2664 | 0.4467 | 0.0 | 0.7903 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3458 | 0.0 | 0.0 | 0.6574 | 0.0 | 0.4026 | 0.3809 | 0.0 | nan | 0.0 | 0.1806 | 0.0 | 0.0 | 0.8257 | 0.7325 | 0.8904 | 0.0 | 0.0085 | 0.1427 | 0.0 |
| 0.4918 | 48.0 | 5136 | 0.5844 | 0.2867 | 0.3512 | 0.8249 | nan | 0.7452 | 0.9517 | 0.6479 | 0.8534 | 0.2971 | nan | 0.4271 | 0.6779 | 0.0 | 0.9265 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6756 | 0.0 | 0.0 | 0.8596 | 0.0 | 0.5569 | 0.3076 | 0.0 | nan | 0.0 | 0.2851 | 0.0 | 0.0 | 0.9064 | 0.8252 | 0.9511 | 0.0 | 0.0297 | 0.3125 | 0.0 | nan | 0.6504 | 0.8138 | 0.5866 | 0.6861 | 0.2693 | nan | 0.2809 | 0.4518 | 0.0 | 0.7985 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3514 | 0.0 | 0.0 | 0.6628 | 0.0 | 0.3947 | 0.2834 | 0.0 | nan | 0.0 | 0.1980 | 0.0 | 0.0 | 0.8343 | 0.7573 | 0.8914 | 0.0 | 0.0293 | 0.2353 | 0.0 |
| 0.4964 | 49.0 | 5243 | 0.6075 | 0.2763 | 0.3394 | 0.8260 | nan | 0.8158 | 0.9484 | 0.1481 | 0.8338 | 0.2194 | nan | 0.4162 | 0.6461 | 0.0 | 0.9305 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6198 | 0.0 | 0.0 | 0.8643 | 0.0 | 0.6049 | 0.5862 | 0.0 | nan | 0.0 | 0.3021 | 0.0 | 0.0 | 0.8990 | 0.8118 | 0.9415 | 0.0 | 0.0430 | 0.2289 | 0.0 | nan | 0.6691 | 0.8134 | 0.1461 | 0.7553 | 0.2065 | nan | 0.2807 | 0.4379 | 0.0 | 0.7924 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3316 | 0.0 | 0.0 | 0.6698 | 0.0 | 0.4370 | 0.4026 | 0.0 | nan | 0.0 | 0.2042 | 0.0 | 0.0 | 0.8254 | 0.7464 | 0.8863 | 0.0 | 0.0412 | 0.1961 | 0.0 |
| 0.5269 | 50.0 | 5350 | 0.5908 | 0.2872 | 0.3517 | 0.8275 | nan | 0.7844 | 0.9384 | 0.4184 | 0.8534 | 0.2576 | nan | 0.4853 | 0.6334 | 0.0 | 0.9291 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7341 | 0.0 | 0.0 | 0.8757 | 0.0 | 0.5056 | 0.4892 | 0.0 | nan | 0.0 | 0.3070 | 0.0 | 0.0 | 0.9109 | 0.8654 | 0.9529 | 0.0 | 0.0719 | 0.2414 | 0.0 | nan | 0.6483 | 0.8192 | 0.4170 | 0.6947 | 0.2399 | nan | 0.2972 | 0.4607 | 0.0 | 0.7964 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3577 | 0.0 | 0.0 | 0.6746 | 0.0 | 0.4162 | 0.3866 | 0.0 | nan | 0.0 | 0.2110 | 0.0 | 0.0 | 0.8339 | 0.7680 | 0.8941 | 0.0 | 0.0691 | 0.2074 | 0.0 |
| 0.4802 | 51.0 | 5457 | 0.5712 | 0.2837 | 0.3464 | 0.8255 | nan | 0.8403 | 0.9234 | 0.5830 | 0.8358 | 0.3140 | nan | 0.4449 | 0.6503 | 0.0 | 0.9308 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7235 | 0.0 | 0.0 | 0.8929 | 0.0 | 0.5109 | 0.3872 | 0.0 | nan | 0.0 | 0.2343 | 0.0 | 0.0 | 0.9138 | 0.7883 | 0.9594 | 0.0 | 0.0271 | 0.1250 | 0.0 | nan | 0.6763 | 0.8123 | 0.5638 | 0.6999 | 0.2848 | nan | 0.2960 | 0.4551 | 0.0 | 0.7899 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3364 | 0.0 | 0.0 | 0.6579 | 0.0 | 0.4071 | 0.3440 | 0.0 | nan | 0.0 | 0.1751 | 0.0 | 0.0 | 0.8265 | 0.7311 | 0.8846 | 0.0 | 0.0266 | 0.1127 | 0.0 |
| 0.4695 | 52.0 | 5564 | 0.5993 | 0.2815 | 0.3477 | 0.8202 | nan | 0.7433 | 0.9466 | 0.3654 | 0.8687 | 0.2254 | nan | 0.4466 | 0.6632 | 0.0 | 0.9326 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7149 | 0.0 | 0.0 | 0.8300 | 0.0 | 0.5175 | 0.4682 | 0.0 | nan | 0.0 | 0.3419 | 0.0 | 0.0 | 0.9250 | 0.8074 | 0.9329 | 0.0 | 0.0434 | 0.3532 | 0.0 | nan | 0.6338 | 0.8101 | 0.3587 | 0.6763 | 0.2116 | nan | 0.2961 | 0.4354 | 0.0 | 0.7837 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3688 | 0.0 | 0.0 | 0.6543 | 0.0 | 0.4152 | 0.3928 | 0.0 | nan | 0.0 | 0.2239 | 0.0 | 0.0 | 0.8275 | 0.7419 | 0.8949 | 0.0 | 0.0419 | 0.2411 | 0.0 |
| 0.4922 | 53.0 | 5671 | 0.6016 | 0.2920 | 0.3666 | 0.8225 | nan | 0.7667 | 0.9288 | 0.6087 | 0.8794 | 0.3013 | nan | 0.4562 | 0.6185 | 0.0 | 0.9306 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7977 | 0.0 | 0.0 | 0.8122 | 0.0 | 0.5192 | 0.5661 | 0.0 | nan | 0.0 | 0.3506 | 0.0 | 0.0 | 0.8941 | 0.9025 | 0.9439 | 0.0 | 0.0648 | 0.3892 | 0.0 | nan | 0.6348 | 0.8233 | 0.5651 | 0.6433 | 0.2803 | nan | 0.3000 | 0.4514 | 0.0 | 0.7950 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3560 | 0.0 | 0.0 | 0.6593 | 0.0 | 0.4181 | 0.4241 | 0.0 | nan | 0.0 | 0.2173 | 0.0 | 0.0 | 0.8134 | 0.7252 | 0.8969 | 0.0 | 0.0619 | 0.2772 | 0.0 |
| 0.4709 | 54.0 | 5778 | 0.5587 | 0.2946 | 0.3657 | 0.8311 | nan | 0.7967 | 0.9329 | 0.4847 | 0.8545 | 0.3175 | nan | 0.4448 | 0.6138 | 0.0 | 0.9310 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8397 | 0.0 | 0.0 | 0.8578 | 0.0 | 0.5495 | 0.5639 | 0.0 | nan | 0.0 | 0.3570 | 0.0 | 0.0 | 0.9111 | 0.8728 | 0.9307 | 0.0 | 0.0570 | 0.3867 | 0.0 | nan | 0.6590 | 0.8228 | 0.4692 | 0.6922 | 0.2950 | nan | 0.2932 | 0.4459 | 0.0 | 0.8035 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3522 | 0.0 | 0.0 | 0.6746 | 0.0 | 0.4287 | 0.4372 | 0.0 | nan | 0.0 | 0.2235 | 0.0 | 0.0 | 0.8385 | 0.7684 | 0.8894 | 0.0 | 0.0516 | 0.2812 | 0.0 |
| 0.4817 | 55.0 | 5885 | 0.5667 | 0.2937 | 0.3562 | 0.8281 | nan | 0.7432 | 0.9461 | 0.5829 | 0.8569 | 0.2769 | nan | 0.4648 | 0.6192 | 0.0 | 0.9399 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6915 | 0.0 | 0.0 | 0.8736 | 0.0 | 0.5043 | 0.3381 | 0.0 | nan | 0.0 | 0.2935 | 0.0 | 0.0 | 0.9209 | 0.8639 | 0.9472 | 0.0 | 0.1859 | 0.3488 | 0.0 | nan | 0.6405 | 0.8191 | 0.5462 | 0.6840 | 0.2586 | nan | 0.2955 | 0.4549 | 0.0 | 0.8026 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3554 | 0.0 | 0.0 | 0.6683 | 0.0 | 0.4009 | 0.3136 | 0.0 | nan | 0.0 | 0.2126 | 0.0 | 0.0 | 0.8331 | 0.7734 | 0.9009 | 0.0 | 0.1641 | 0.2759 | 0.0 |
| 0.4595 | 56.0 | 5992 | 0.5874 | 0.2862 | 0.3506 | 0.8218 | nan | 0.7450 | 0.9434 | 0.5648 | 0.8383 | 0.3200 | nan | 0.3914 | 0.6522 | 0.0 | 0.9315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7153 | 0.0 | 0.0 | 0.8812 | 0.0 | 0.5688 | 0.4178 | 0.0 | nan | 0.0 | 0.2323 | 0.0 | 0.0 | 0.8861 | 0.8754 | 0.9452 | 0.0 | 0.1441 | 0.1669 | 0.0 | nan | 0.6567 | 0.8024 | 0.5299 | 0.6787 | 0.2926 | nan | 0.2783 | 0.4446 | 0.0 | 0.8096 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3941 | 0.0 | 0.0 | 0.6572 | 0.0 | 0.3749 | 0.3000 | 0.0 | nan | 0.0 | 0.1759 | 0.0 | 0.0 | 0.8286 | 0.7613 | 0.8968 | 0.0 | 0.1304 | 0.1454 | 0.0 |
| 0.4972 | 57.0 | 6099 | 0.5801 | 0.2977 | 0.3694 | 0.8266 | nan | 0.7642 | 0.9442 | 0.6982 | 0.8707 | 0.2118 | nan | 0.3879 | 0.6958 | 0.0 | 0.9321 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8075 | 0.0 | 0.0 | 0.8488 | 0.0 | 0.5833 | 0.5695 | 0.0 | nan | 0.0 | 0.3265 | 0.0 | 0.0 | 0.9050 | 0.8626 | 0.9449 | 0.0 | 0.1085 | 0.3580 | 0.0 | nan | 0.6488 | 0.8123 | 0.6271 | 0.6730 | 0.1990 | nan | 0.2857 | 0.4630 | 0.0 | 0.8064 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3915 | 0.0 | 0.0 | 0.6675 | 0.0 | 0.4207 | 0.4152 | 0.0 | nan | 0.0 | 0.2344 | 0.0 | 0.0 | 0.8336 | 0.7738 | 0.8985 | 0.0 | 0.0953 | 0.2798 | 0.0 |
| 0.4297 | 58.0 | 6206 | 0.5944 | 0.2996 | 0.3705 | 0.8216 | nan | 0.7302 | 0.9322 | 0.6533 | 0.8923 | 0.2638 | nan | 0.4137 | 0.6574 | 0.0 | 0.9288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7123 | 0.0 | 0.0 | 0.8174 | 0.0 | 0.5952 | 0.5885 | 0.0 | nan | 0.0 | 0.3330 | 0.0 | 0.0 | 0.9268 | 0.8476 | 0.9462 | 0.0 | 0.2844 | 0.3339 | 0.0 | nan | 0.6223 | 0.8119 | 0.6034 | 0.5998 | 0.2447 | nan | 0.2997 | 0.4546 | 0.0 | 0.8171 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4156 | 0.0 | 0.0 | 0.6598 | 0.0 | 0.4330 | 0.4211 | 0.0 | nan | 0.0 | 0.2281 | 0.0 | 0.0 | 0.8336 | 0.7715 | 0.9005 | 0.0 | 0.2255 | 0.2440 | 0.0 |
| 0.4257 | 59.0 | 6313 | 0.6031 | 0.2902 | 0.3587 | 0.8230 | nan | 0.7653 | 0.9239 | 0.6527 | 0.8916 | 0.2514 | nan | 0.4532 | 0.6997 | 0.0 | 0.9376 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6646 | 0.0 | 0.0 | 0.8832 | 0.0 | 0.5420 | 0.4346 | 0.0 | nan | 0.0 | 0.3438 | 0.0 | 0.0 | 0.9162 | 0.8358 | 0.9575 | 0.0 | 0.1414 | 0.1839 | 0.0 | nan | 0.6284 | 0.8186 | 0.6079 | 0.6227 | 0.2397 | nan | 0.2999 | 0.4397 | 0.0 | 0.8057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3776 | 0.0 | 0.0 | 0.6643 | 0.0 | 0.4166 | 0.3533 | 0.0 | nan | 0.0 | 0.2243 | 0.0 | 0.0 | 0.8395 | 0.7679 | 0.8974 | 0.0 | 0.1223 | 0.1599 | 0.0 |
| 0.4296 | 60.0 | 6420 | 0.5946 | 0.2944 | 0.3624 | 0.8258 | nan | 0.7561 | 0.9474 | 0.6031 | 0.8511 | 0.2484 | nan | 0.3636 | 0.7135 | 0.0 | 0.9342 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7552 | 0.0 | 0.0 | 0.8357 | 0.0 | 0.5080 | 0.4909 | 0.0 | nan | 0.0 | 0.3729 | 0.0 | 0.0 | 0.9267 | 0.8464 | 0.9518 | 0.0 | 0.1229 | 0.3694 | 0.0 | nan | 0.6454 | 0.8089 | 0.5664 | 0.6767 | 0.2304 | nan | 0.2676 | 0.4330 | 0.0 | 0.8101 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4132 | 0.0 | 0.0 | 0.6672 | 0.0 | 0.4207 | 0.3790 | 0.0 | nan | 0.0 | 0.2213 | 0.0 | 0.0 | 0.8356 | 0.7718 | 0.9003 | 0.0 | 0.1051 | 0.2681 | 0.0 |
| 0.4466 | 61.0 | 6527 | 0.5980 | 0.3001 | 0.3642 | 0.8282 | nan | 0.7422 | 0.9441 | 0.6934 | 0.8674 | 0.2867 | nan | 0.4244 | 0.6455 | 0.0 | 0.9306 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6763 | 0.0 | 0.0 | 0.8719 | 0.0 | 0.5553 | 0.4299 | 0.0 | nan | 0.0 | 0.2880 | 0.0 | 0.0 | 0.9100 | 0.8833 | 0.9525 | 0.0 | 0.2727 | 0.2809 | 0.0 | nan | 0.6374 | 0.8179 | 0.6350 | 0.6721 | 0.2686 | nan | 0.2882 | 0.4531 | 0.0 | 0.8191 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4176 | 0.0 | 0.0 | 0.6744 | 0.0 | 0.4024 | 0.3371 | 0.0 | nan | 0.0 | 0.1962 | 0.0 | 0.0 | 0.8345 | 0.7681 | 0.9003 | 0.0 | 0.2299 | 0.2500 | 0.0 |
| 0.4241 | 62.0 | 6634 | 0.5839 | 0.2995 | 0.3691 | 0.8287 | nan | 0.7865 | 0.9340 | 0.6469 | 0.8640 | 0.2307 | nan | 0.4864 | 0.6815 | 0.0 | 0.9352 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7356 | 0.0 | 0.0 | 0.8697 | 0.0 | 0.5722 | 0.5777 | 0.0 | nan | 0.0 | 0.3104 | 0.0 | 0.0 | 0.8992 | 0.8627 | 0.9563 | 0.0 | 0.1362 | 0.3268 | 0.0 | nan | 0.6410 | 0.8222 | 0.6152 | 0.6686 | 0.2212 | nan | 0.3016 | 0.4654 | 0.0 | 0.8115 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3924 | 0.0 | 0.0 | 0.6780 | 0.0 | 0.4340 | 0.4100 | 0.0 | nan | 0.0 | 0.2163 | 0.0 | 0.0 | 0.8374 | 0.7735 | 0.8996 | 0.0 | 0.1217 | 0.2751 | 0.0 |
| 0.4301 | 63.0 | 6741 | 0.5800 | 0.2924 | 0.3520 | 0.8283 | nan | 0.7768 | 0.9481 | 0.5269 | 0.8604 | 0.2304 | nan | 0.4541 | 0.5774 | 0.0 | 0.9296 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6931 | 0.0 | 0.0 | 0.8854 | 0.0 | 0.5592 | 0.3688 | 0.0 | nan | 0.0 | 0.2901 | 0.0 | 0.0 | 0.9181 | 0.8332 | 0.9422 | 0.0 | 0.2659 | 0.2051 | 0.0 | nan | 0.6594 | 0.8162 | 0.5133 | 0.7126 | 0.2168 | nan | 0.2955 | 0.4311 | 0.0 | 0.8130 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3954 | 0.0 | 0.0 | 0.6715 | 0.0 | 0.3933 | 0.3227 | 0.0 | nan | 0.0 | 0.2063 | 0.0 | 0.0 | 0.8385 | 0.7662 | 0.9011 | 0.0 | 0.2251 | 0.1793 | 0.0 |
| 0.4282 | 64.0 | 6848 | 0.5546 | 0.2960 | 0.3590 | 0.8307 | nan | 0.7868 | 0.9365 | 0.6291 | 0.8655 | 0.2797 | nan | 0.4857 | 0.6495 | 0.0 | 0.9440 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6531 | 0.0 | 0.0 | 0.8811 | 0.0 | 0.5376 | 0.4277 | 0.0 | nan | 0.0 | 0.2711 | 0.0 | 0.0 | 0.9272 | 0.8346 | 0.9506 | 0.0 | 0.2383 | 0.1889 | 0.0 | nan | 0.6604 | 0.8267 | 0.5860 | 0.6907 | 0.2597 | nan | 0.3043 | 0.4349 | 0.0 | 0.8033 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3992 | 0.0 | 0.0 | 0.6664 | 0.0 | 0.4137 | 0.3631 | 0.0 | nan | 0.0 | 0.1893 | 0.0 | 0.0 | 0.8362 | 0.7675 | 0.9011 | 0.0 | 0.1973 | 0.1722 | 0.0 |
| 0.4217 | 65.0 | 6955 | 0.5678 | 0.3011 | 0.3648 | 0.8301 | nan | 0.7971 | 0.9362 | 0.6074 | 0.8616 | 0.2837 | nan | 0.4557 | 0.6452 | 0.0 | 0.9430 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6666 | 0.0 | 0.0 | 0.8609 | 0.0 | 0.5462 | 0.4516 | 0.0 | nan | 0.0 | 0.3334 | 0.0 | 0.0 | 0.9182 | 0.8233 | 0.9599 | 0.0 | 0.2847 | 0.2985 | 0.0 | nan | 0.6547 | 0.8215 | 0.5730 | 0.7057 | 0.2655 | nan | 0.2969 | 0.4596 | 0.0 | 0.7902 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3813 | 0.0 | 0.0 | 0.6660 | 0.0 | 0.4061 | 0.3796 | 0.0 | nan | 0.0 | 0.2260 | 0.0 | 0.0 | 0.8349 | 0.7645 | 0.8985 | 0.0 | 0.2453 | 0.2659 | 0.0 |
| 0.4464 | 66.0 | 7062 | 0.5729 | 0.3010 | 0.3652 | 0.8313 | nan | 0.7799 | 0.9313 | 0.5977 | 0.8700 | 0.3423 | nan | 0.4580 | 0.6002 | 0.0 | 0.9398 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5995 | 0.0 | 0.0 | 0.8886 | 0.0 | 0.5401 | 0.5306 | 0.0 | nan | 0.0 | 0.3340 | 0.0 | 0.0 | 0.9036 | 0.8522 | 0.9591 | 0.0 | 0.2789 | 0.2821 | 0.0 | nan | 0.6616 | 0.8237 | 0.5684 | 0.6686 | 0.3166 | nan | 0.3077 | 0.4360 | 0.0 | 0.8019 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3915 | 0.0 | 0.0 | 0.6702 | 0.0 | 0.4127 | 0.3736 | 0.0 | nan | 0.0 | 0.2210 | 0.0 | 0.0 | 0.8325 | 0.7774 | 0.8983 | 0.0 | 0.2219 | 0.2476 | 0.0 |
| 0.4301 | 67.0 | 7169 | 0.5685 | 0.3014 | 0.3645 | 0.8338 | nan | 0.7739 | 0.9469 | 0.6631 | 0.8587 | 0.3149 | nan | 0.4595 | 0.6357 | 0.0 | 0.9363 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6059 | 0.0 | 0.0 | 0.8867 | 0.0 | 0.5642 | 0.5072 | 0.0 | nan | 0.0 | 0.2801 | 0.0 | 0.0 | 0.9005 | 0.8686 | 0.9538 | 0.0 | 0.2690 | 0.2390 | 0.0 | nan | 0.6602 | 0.8274 | 0.6195 | 0.7196 | 0.2892 | nan | 0.3109 | 0.4324 | 0.0 | 0.8114 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3900 | 0.0 | 0.0 | 0.6722 | 0.0 | 0.4242 | 0.3639 | 0.0 | nan | 0.0 | 0.2029 | 0.0 | 0.0 | 0.8328 | 0.7705 | 0.8975 | 0.0 | 0.2093 | 0.2123 | 0.0 |
| 0.4372 | 68.0 | 7276 | 0.5774 | 0.3001 | 0.3620 | 0.8331 | nan | 0.8014 | 0.9429 | 0.5934 | 0.8543 | 0.2700 | nan | 0.4604 | 0.6515 | 0.0 | 0.9315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7000 | 0.0 | 0.0 | 0.8804 | 0.0 | 0.5757 | 0.4549 | 0.0 | nan | 0.0 | 0.2809 | 0.0 | 0.0 | 0.9107 | 0.8576 | 0.9436 | 0.0 | 0.2058 | 0.2682 | 0.0 | nan | 0.6612 | 0.8234 | 0.5769 | 0.7374 | 0.2506 | nan | 0.2950 | 0.4312 | 0.0 | 0.8122 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4148 | 0.0 | 0.0 | 0.6733 | 0.0 | 0.4192 | 0.3731 | 0.0 | nan | 0.0 | 0.2016 | 0.0 | 0.0 | 0.8396 | 0.7792 | 0.9010 | 0.0 | 0.1768 | 0.2359 | 0.0 |
| 0.4152 | 69.0 | 7383 | 0.5865 | 0.2997 | 0.3656 | 0.8304 | nan | 0.7695 | 0.9499 | 0.5623 | 0.8607 | 0.2533 | nan | 0.4527 | 0.6812 | 0.0 | 0.9392 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7690 | 0.0 | 0.0 | 0.8475 | 0.0 | 0.5371 | 0.5074 | 0.0 | nan | 0.0 | 0.3253 | 0.0 | 0.0 | 0.9251 | 0.8096 | 0.9561 | 0.0 | 0.1738 | 0.3779 | 0.0 | nan | 0.6567 | 0.8192 | 0.5492 | 0.7168 | 0.2357 | nan | 0.2922 | 0.4481 | 0.0 | 0.8000 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4081 | 0.0 | 0.0 | 0.6716 | 0.0 | 0.4078 | 0.4154 | 0.0 | nan | 0.0 | 0.2236 | 0.0 | 0.0 | 0.8347 | 0.7605 | 0.8992 | 0.0 | 0.1559 | 0.2954 | 0.0 |
| 0.3752 | 70.0 | 7490 | 0.5815 | 0.2986 | 0.3657 | 0.8312 | nan | 0.7869 | 0.9356 | 0.6206 | 0.8688 | 0.3233 | nan | 0.4608 | 0.6776 | 0.0 | 0.9427 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7906 | 0.0 | 0.0 | 0.8797 | 0.0 | 0.5149 | 0.4586 | 0.0 | nan | 0.0 | 0.3025 | 0.0 | 0.0 | 0.9155 | 0.8223 | 0.9550 | 0.0 | 0.1530 | 0.2946 | 0.0 | nan | 0.6478 | 0.8285 | 0.5954 | 0.6915 | 0.2975 | nan | 0.2928 | 0.4499 | 0.0 | 0.7924 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4008 | 0.0 | 0.0 | 0.6707 | 0.0 | 0.4119 | 0.3609 | 0.0 | nan | 0.0 | 0.2096 | 0.0 | 0.0 | 0.8349 | 0.7640 | 0.9017 | 0.0 | 0.1404 | 0.2649 | 0.0 |
| 0.4303 | 71.0 | 7597 | 0.5584 | 0.3069 | 0.3678 | 0.8376 | nan | 0.7772 | 0.9490 | 0.6739 | 0.8657 | 0.3428 | nan | 0.4455 | 0.6180 | 0.0 | 0.9438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7038 | 0.0 | 0.0 | 0.8718 | 0.0 | 0.5481 | 0.4799 | 0.0 | nan | 0.0 | 0.2685 | 0.0 | 0.0 | 0.9220 | 0.8662 | 0.9529 | 0.0 | 0.2862 | 0.2554 | 0.0 | nan | 0.6665 | 0.8277 | 0.6340 | 0.7189 | 0.3123 | nan | 0.3026 | 0.4458 | 0.0 | 0.8048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4185 | 0.0 | 0.0 | 0.6825 | 0.0 | 0.4252 | 0.4008 | 0.0 | nan | 0.0 | 0.1970 | 0.0 | 0.0 | 0.8378 | 0.7761 | 0.9033 | 0.0 | 0.2345 | 0.2315 | 0.0 |
| 0.4076 | 72.0 | 7704 | 0.5620 | 0.3076 | 0.3717 | 0.8355 | nan | 0.7952 | 0.9487 | 0.6617 | 0.8569 | 0.2702 | nan | 0.4229 | 0.6185 | 0.0 | 0.9411 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7300 | 0.0 | 0.0 | 0.8682 | 0.0 | 0.5422 | 0.5640 | 0.0 | nan | 0.0 | 0.2992 | 0.0 | 0.0 | 0.9085 | 0.8604 | 0.9578 | 0.0 | 0.2977 | 0.3501 | 0.0 | nan | 0.6677 | 0.8230 | 0.6294 | 0.7269 | 0.2521 | nan | 0.2996 | 0.4452 | 0.0 | 0.8073 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4243 | 0.0 | 0.0 | 0.6803 | 0.0 | 0.4240 | 0.4145 | 0.0 | nan | 0.0 | 0.2007 | 0.0 | 0.0 | 0.8393 | 0.7749 | 0.9002 | 0.0 | 0.2384 | 0.2949 | 0.0 |
| 0.3955 | 73.0 | 7811 | 0.5621 | 0.3101 | 0.3785 | 0.8339 | nan | 0.7895 | 0.9417 | 0.7490 | 0.8660 | 0.3128 | nan | 0.4477 | 0.5938 | 0.0 | 0.9339 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7367 | 0.0 | 0.0 | 0.8139 | 0.0 | 0.5560 | 0.5443 | 0.0 | nan | 0.0 | 0.3697 | 0.0 | 0.0 | 0.9193 | 0.8727 | 0.9590 | 0.0 | 0.2933 | 0.4120 | 0.0 | nan | 0.6573 | 0.8268 | 0.6815 | 0.6924 | 0.2906 | nan | 0.2931 | 0.4485 | 0.0 | 0.8106 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3907 | 0.0 | 0.0 | 0.6750 | 0.0 | 0.4267 | 0.4409 | 0.0 | nan | 0.0 | 0.2378 | 0.0 | 0.0 | 0.8327 | 0.7671 | 0.9007 | 0.0 | 0.2377 | 0.3121 | 0.0 |
| 0.4452 | 74.0 | 7918 | 0.5760 | 0.3049 | 0.3686 | 0.8320 | nan | 0.7726 | 0.9410 | 0.6627 | 0.8454 | 0.2764 | nan | 0.4363 | 0.6118 | 0.0 | 0.9354 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7153 | 0.0 | 0.0 | 0.8825 | 0.0 | 0.5270 | 0.5036 | 0.0 | nan | 0.0 | 0.2900 | 0.0 | 0.0 | 0.9235 | 0.8576 | 0.9476 | 0.0 | 0.3158 | 0.3507 | 0.0 | nan | 0.6590 | 0.8189 | 0.6276 | 0.7115 | 0.2563 | nan | 0.2929 | 0.4501 | 0.0 | 0.8114 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4173 | 0.0 | 0.0 | 0.6722 | 0.0 | 0.4165 | 0.4052 | 0.0 | nan | 0.0 | 0.2029 | 0.0 | 0.0 | 0.8327 | 0.7690 | 0.9035 | 0.0 | 0.2170 | 0.2945 | 0.0 |
| 0.3568 | 75.0 | 8025 | 0.5606 | 0.3077 | 0.3716 | 0.8363 | nan | 0.7962 | 0.9389 | 0.6796 | 0.8557 | 0.3318 | nan | 0.4446 | 0.5949 | 0.0 | 0.9376 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7570 | 0.0 | 0.0 | 0.8904 | 0.0 | 0.5460 | 0.4607 | 0.0 | nan | 0.0 | 0.2889 | 0.0 | 0.0 | 0.9088 | 0.8802 | 0.9495 | 0.0 | 0.3142 | 0.3154 | 0.0 | nan | 0.6583 | 0.8284 | 0.6421 | 0.7339 | 0.3049 | nan | 0.2952 | 0.4438 | 0.0 | 0.8132 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4285 | 0.0 | 0.0 | 0.6766 | 0.0 | 0.4371 | 0.3675 | 0.0 | nan | 0.0 | 0.1966 | 0.0 | 0.0 | 0.8377 | 0.7781 | 0.9054 | 0.0 | 0.2262 | 0.2728 | 0.0 |
| 0.4273 | 76.0 | 8132 | 0.5716 | 0.3064 | 0.3689 | 0.8359 | nan | 0.8006 | 0.9397 | 0.6296 | 0.8601 | 0.2801 | nan | 0.4445 | 0.5972 | 0.0 | 0.9332 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7044 | 0.0 | 0.0 | 0.8808 | 0.0 | 0.5899 | 0.5244 | 0.0 | nan | 0.0 | 0.2834 | 0.0 | 0.0 | 0.9226 | 0.8539 | 0.9494 | 0.0 | 0.3082 | 0.3026 | 0.0 | nan | 0.6647 | 0.8258 | 0.6010 | 0.7331 | 0.2601 | nan | 0.2925 | 0.4412 | 0.0 | 0.8204 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4267 | 0.0 | 0.0 | 0.6808 | 0.0 | 0.4422 | 0.3883 | 0.0 | nan | 0.0 | 0.2059 | 0.0 | 0.0 | 0.8376 | 0.7809 | 0.9037 | 0.0 | 0.2323 | 0.2677 | 0.0 |
| 0.3989 | 77.0 | 8239 | 0.5950 | 0.3046 | 0.3710 | 0.8298 | nan | 0.7813 | 0.9402 | 0.6772 | 0.8622 | 0.2998 | nan | 0.3972 | 0.6240 | 0.0 | 0.9352 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7485 | 0.0 | 0.0 | 0.8968 | 0.0 | 0.5384 | 0.5001 | 0.0 | nan | 0.0 | 0.3072 | 0.0 | 0.0 | 0.8772 | 0.8678 | 0.9564 | 0.0 | 0.3039 | 0.3580 | 0.0 | nan | 0.6575 | 0.8156 | 0.6366 | 0.6857 | 0.2767 | nan | 0.2826 | 0.4490 | 0.0 | 0.8130 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4133 | 0.0 | 0.0 | 0.6752 | 0.0 | 0.4315 | 0.3788 | 0.0 | nan | 0.0 | 0.2079 | 0.0 | 0.0 | 0.8275 | 0.7582 | 0.8964 | 0.0 | 0.2436 | 0.2983 | 0.0 |
| 0.4225 | 78.0 | 8346 | 0.5773 | 0.3053 | 0.3660 | 0.8346 | nan | 0.7976 | 0.9434 | 0.7047 | 0.8617 | 0.2893 | nan | 0.4078 | 0.6335 | 0.0 | 0.9320 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6728 | 0.0 | 0.0 | 0.8665 | 0.0 | 0.5345 | 0.4411 | 0.0 | nan | 0.0 | 0.2897 | 0.0 | 0.0 | 0.9188 | 0.8781 | 0.9634 | 0.0 | 0.2847 | 0.2928 | 0.0 | nan | 0.6589 | 0.8257 | 0.6567 | 0.7034 | 0.2678 | nan | 0.2897 | 0.4498 | 0.0 | 0.8247 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4262 | 0.0 | 0.0 | 0.6748 | 0.0 | 0.4348 | 0.3547 | 0.0 | nan | 0.0 | 0.2030 | 0.0 | 0.0 | 0.8333 | 0.7724 | 0.8983 | 0.0 | 0.2337 | 0.2629 | 0.0 |
| 0.3732 | 79.0 | 8453 | 0.5765 | 0.3071 | 0.3699 | 0.8335 | nan | 0.7966 | 0.9393 | 0.6529 | 0.8586 | 0.3281 | nan | 0.4279 | 0.6442 | 0.0 | 0.9438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6990 | 0.0 | 0.0 | 0.8564 | 0.0 | 0.5440 | 0.4633 | 0.0 | nan | 0.0 | 0.3160 | 0.0 | 0.0 | 0.9170 | 0.8463 | 0.9560 | 0.0 | 0.3043 | 0.3435 | 0.0 | nan | 0.6577 | 0.8237 | 0.6301 | 0.7032 | 0.3022 | nan | 0.2923 | 0.4703 | 0.0 | 0.7838 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4134 | 0.0 | 0.0 | 0.6680 | 0.0 | 0.4317 | 0.3638 | 0.0 | nan | 0.0 | 0.2212 | 0.0 | 0.0 | 0.8389 | 0.7737 | 0.9019 | 0.0 | 0.2508 | 0.2994 | 0.0 |
| 0.406 | 80.0 | 8560 | 0.5652 | 0.3088 | 0.3702 | 0.8370 | nan | 0.7922 | 0.9442 | 0.6424 | 0.8559 | 0.3423 | nan | 0.4291 | 0.6286 | 0.0 | 0.9319 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6899 | 0.0 | 0.0 | 0.8776 | 0.0 | 0.5416 | 0.4536 | 0.0 | nan | 0.0 | 0.3157 | 0.0 | 0.0 | 0.9151 | 0.8583 | 0.9587 | 0.0 | 0.3040 | 0.3666 | 0.0 | nan | 0.6601 | 0.8284 | 0.6172 | 0.7326 | 0.3070 | nan | 0.2938 | 0.4547 | 0.0 | 0.8155 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4208 | 0.0 | 0.0 | 0.6769 | 0.0 | 0.4275 | 0.3633 | 0.0 | nan | 0.0 | 0.2149 | 0.0 | 0.0 | 0.8398 | 0.7758 | 0.9023 | 0.0 | 0.2380 | 0.3132 | 0.0 |
| 0.3805 | 81.0 | 8667 | 0.5927 | 0.3068 | 0.3739 | 0.8318 | nan | 0.7781 | 0.9398 | 0.6409 | 0.8694 | 0.2671 | nan | 0.4200 | 0.6579 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7389 | 0.0 | 0.0 | 0.8685 | 0.0 | 0.5615 | 0.5156 | 0.0 | nan | 0.0 | 0.3313 | 0.0 | 0.0 | 0.9068 | 0.8633 | 0.9571 | 0.0 | 0.3063 | 0.4028 | 0.0 | nan | 0.6610 | 0.8161 | 0.6084 | 0.6775 | 0.2471 | nan | 0.2901 | 0.4606 | 0.0 | 0.8100 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4319 | 0.0 | 0.0 | 0.6832 | 0.0 | 0.4334 | 0.3861 | 0.0 | nan | 0.0 | 0.2187 | 0.0 | 0.0 | 0.8389 | 0.7767 | 0.9039 | 0.0 | 0.2378 | 0.3349 | 0.0 |
| 0.417 | 82.0 | 8774 | 0.5876 | 0.3059 | 0.3675 | 0.8354 | nan | 0.8054 | 0.9478 | 0.5985 | 0.8555 | 0.2401 | nan | 0.4154 | 0.6657 | 0.0 | 0.9342 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6828 | 0.0 | 0.0 | 0.8633 | 0.0 | 0.5546 | 0.4982 | 0.0 | nan | 0.0 | 0.3150 | 0.0 | 0.0 | 0.9250 | 0.8476 | 0.9540 | 0.0 | 0.2826 | 0.3741 | 0.0 | nan | 0.6661 | 0.8224 | 0.5876 | 0.7389 | 0.2260 | nan | 0.2843 | 0.4586 | 0.0 | 0.8168 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4123 | 0.0 | 0.0 | 0.6820 | 0.0 | 0.4254 | 0.3936 | 0.0 | nan | 0.0 | 0.2202 | 0.0 | 0.0 | 0.8386 | 0.7766 | 0.9026 | 0.0 | 0.2190 | 0.3189 | 0.0 |
| 0.3894 | 83.0 | 8881 | 0.5765 | 0.3087 | 0.3745 | 0.8344 | nan | 0.8097 | 0.9389 | 0.6470 | 0.8604 | 0.2726 | nan | 0.4096 | 0.6719 | 0.0 | 0.9377 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7344 | 0.0 | 0.0 | 0.8537 | 0.0 | 0.5445 | 0.5077 | 0.0 | nan | 0.0 | 0.3452 | 0.0 | 0.0 | 0.9142 | 0.8713 | 0.9550 | 0.0 | 0.2862 | 0.4235 | 0.0 | nan | 0.6627 | 0.8207 | 0.6235 | 0.7133 | 0.2534 | nan | 0.2864 | 0.4618 | 0.0 | 0.8177 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4237 | 0.0 | 0.0 | 0.6802 | 0.0 | 0.4332 | 0.3990 | 0.0 | nan | 0.0 | 0.2247 | 0.0 | 0.0 | 0.8385 | 0.7756 | 0.9046 | 0.0 | 0.2248 | 0.3333 | 0.0 |
| 0.3837 | 84.0 | 8988 | 0.6004 | 0.3036 | 0.3712 | 0.8292 | nan | 0.7741 | 0.9374 | 0.6093 | 0.8726 | 0.2691 | nan | 0.4232 | 0.6590 | 0.0 | 0.9410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7466 | 0.0 | 0.0 | 0.8726 | 0.0 | 0.5627 | 0.4905 | 0.0 | nan | 0.0 | 0.3329 | 0.0 | 0.0 | 0.9002 | 0.8601 | 0.9551 | 0.0 | 0.3032 | 0.3703 | 0.0 | nan | 0.6535 | 0.8166 | 0.5919 | 0.6585 | 0.2514 | nan | 0.2897 | 0.4643 | 0.0 | 0.8054 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4115 | 0.0 | 0.0 | 0.6766 | 0.0 | 0.4207 | 0.3879 | 0.0 | nan | 0.0 | 0.2191 | 0.0 | 0.0 | 0.8380 | 0.7728 | 0.9020 | 0.0 | 0.2484 | 0.3078 | 0.0 |
| 0.3893 | 85.0 | 9095 | 0.5829 | 0.3084 | 0.3760 | 0.8340 | nan | 0.7840 | 0.9454 | 0.6141 | 0.8637 | 0.2819 | nan | 0.4197 | 0.6554 | 0.0 | 0.9299 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7525 | 0.0 | 0.0 | 0.8579 | 0.0 | 0.5518 | 0.5349 | 0.0 | nan | 0.0 | 0.3574 | 0.0 | 0.0 | 0.9054 | 0.8711 | 0.9585 | 0.0 | 0.3114 | 0.4356 | 0.0 | nan | 0.6558 | 0.8246 | 0.5972 | 0.7109 | 0.2607 | nan | 0.2940 | 0.4619 | 0.0 | 0.8209 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4206 | 0.0 | 0.0 | 0.6725 | 0.0 | 0.4299 | 0.4114 | 0.0 | nan | 0.0 | 0.2291 | 0.0 | 0.0 | 0.8388 | 0.7753 | 0.9033 | 0.0 | 0.2374 | 0.3259 | 0.0 |
| 0.3568 | 86.0 | 9202 | 0.5820 | 0.3080 | 0.3747 | 0.8325 | nan | 0.7903 | 0.9414 | 0.6354 | 0.8605 | 0.2640 | nan | 0.4273 | 0.6557 | 0.0 | 0.9349 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7446 | 0.0 | 0.0 | 0.8585 | 0.0 | 0.5437 | 0.5643 | 0.0 | nan | 0.0 | 0.3338 | 0.0 | 0.0 | 0.9118 | 0.8567 | 0.9552 | 0.0 | 0.3089 | 0.4020 | 0.0 | nan | 0.6572 | 0.8189 | 0.6172 | 0.7005 | 0.2476 | nan | 0.2908 | 0.4655 | 0.0 | 0.8147 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4183 | 0.0 | 0.0 | 0.6758 | 0.0 | 0.4228 | 0.4167 | 0.0 | nan | 0.0 | 0.2271 | 0.0 | 0.0 | 0.8392 | 0.7752 | 0.9050 | 0.0 | 0.2439 | 0.3209 | 0.0 |
| 0.3812 | 87.0 | 9309 | 0.5807 | 0.3103 | 0.3773 | 0.8331 | nan | 0.7822 | 0.9393 | 0.7419 | 0.8598 | 0.2849 | nan | 0.4434 | 0.6552 | 0.0 | 0.9438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6996 | 0.0 | 0.0 | 0.8562 | 0.0 | 0.5577 | 0.5600 | 0.0 | nan | 0.0 | 0.3260 | 0.0 | 0.0 | 0.9138 | 0.8579 | 0.9549 | 0.0 | 0.3072 | 0.3889 | 0.0 | nan | 0.6617 | 0.8195 | 0.6735 | 0.6867 | 0.2661 | nan | 0.2962 | 0.4633 | 0.0 | 0.8066 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4183 | 0.0 | 0.0 | 0.6738 | 0.0 | 0.4195 | 0.4302 | 0.0 | nan | 0.0 | 0.2260 | 0.0 | 0.0 | 0.8401 | 0.7766 | 0.9048 | 0.0 | 0.2449 | 0.3213 | 0.0 |
| 0.3861 | 88.0 | 9416 | 0.5830 | 0.3086 | 0.3753 | 0.8326 | nan | 0.7898 | 0.9402 | 0.6848 | 0.8620 | 0.2609 | nan | 0.4195 | 0.6699 | 0.0 | 0.9395 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7112 | 0.0 | 0.0 | 0.8726 | 0.0 | 0.5395 | 0.5290 | 0.0 | nan | 0.0 | 0.3415 | 0.0 | 0.0 | 0.9041 | 0.8616 | 0.9560 | 0.0 | 0.3120 | 0.4148 | 0.0 | nan | 0.6604 | 0.8180 | 0.6453 | 0.6906 | 0.2451 | nan | 0.2897 | 0.4600 | 0.0 | 0.8129 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4175 | 0.0 | 0.0 | 0.6771 | 0.0 | 0.4296 | 0.4125 | 0.0 | nan | 0.0 | 0.2268 | 0.0 | 0.0 | 0.8392 | 0.7765 | 0.9045 | 0.0 | 0.2408 | 0.3276 | 0.0 |
| 0.3794 | 89.0 | 9523 | 0.5810 | 0.3097 | 0.3749 | 0.8352 | nan | 0.7929 | 0.9458 | 0.6782 | 0.8640 | 0.2715 | nan | 0.4132 | 0.6713 | 0.0 | 0.9444 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6702 | 0.0 | 0.0 | 0.8568 | 0.0 | 0.5447 | 0.5202 | 0.0 | nan | 0.0 | 0.3659 | 0.0 | 0.0 | 0.9108 | 0.8647 | 0.9536 | 0.0 | 0.3065 | 0.4218 | 0.0 | nan | 0.6601 | 0.8260 | 0.6376 | 0.7196 | 0.2537 | nan | 0.2907 | 0.4658 | 0.0 | 0.7987 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4120 | 0.0 | 0.0 | 0.6758 | 0.0 | 0.4326 | 0.4139 | 0.0 | nan | 0.0 | 0.2351 | 0.0 | 0.0 | 0.8374 | 0.7762 | 0.9056 | 0.0 | 0.2380 | 0.3327 | 0.0 |
| 0.3862 | 90.0 | 9630 | 0.5829 | 0.3101 | 0.3741 | 0.8360 | nan | 0.7848 | 0.9513 | 0.6485 | 0.8585 | 0.2785 | nan | 0.4155 | 0.6589 | 0.0 | 0.9368 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7153 | 0.0 | 0.0 | 0.8629 | 0.0 | 0.5388 | 0.5348 | 0.0 | nan | 0.0 | 0.3483 | 0.0 | 0.0 | 0.9114 | 0.8631 | 0.9563 | 0.0 | 0.3089 | 0.3983 | 0.0 | nan | 0.6612 | 0.8226 | 0.6249 | 0.7241 | 0.2594 | nan | 0.2904 | 0.4605 | 0.0 | 0.8116 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4253 | 0.0 | 0.0 | 0.6824 | 0.0 | 0.4310 | 0.4208 | 0.0 | nan | 0.0 | 0.2285 | 0.0 | 0.0 | 0.8396 | 0.7744 | 0.9050 | 0.0 | 0.2350 | 0.3275 | 0.0 |
| 0.3932 | 91.0 | 9737 | 0.5930 | 0.3098 | 0.3750 | 0.8355 | nan | 0.7909 | 0.9519 | 0.6784 | 0.8521 | 0.2627 | nan | 0.4139 | 0.6617 | 0.0 | 0.9373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7455 | 0.0 | 0.0 | 0.8494 | 0.0 | 0.5452 | 0.5494 | 0.0 | nan | 0.0 | 0.3350 | 0.0 | 0.0 | 0.9168 | 0.8574 | 0.9575 | 0.0 | 0.3072 | 0.3890 | 0.0 | nan | 0.6640 | 0.8206 | 0.6375 | 0.7287 | 0.2458 | nan | 0.2891 | 0.4614 | 0.0 | 0.8079 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4162 | 0.0 | 0.0 | 0.6814 | 0.0 | 0.4299 | 0.4185 | 0.0 | nan | 0.0 | 0.2279 | 0.0 | 0.0 | 0.8395 | 0.7741 | 0.9051 | 0.0 | 0.2450 | 0.3209 | 0.0 |
| 0.3772 | 92.0 | 9844 | 0.5853 | 0.3107 | 0.3765 | 0.8360 | nan | 0.7945 | 0.9480 | 0.7226 | 0.8569 | 0.2856 | nan | 0.4213 | 0.6610 | 0.0 | 0.9426 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7424 | 0.0 | 0.0 | 0.8737 | 0.0 | 0.5441 | 0.5365 | 0.0 | nan | 0.0 | 0.3261 | 0.0 | 0.0 | 0.9066 | 0.8428 | 0.9565 | 0.0 | 0.3083 | 0.3800 | 0.0 | nan | 0.6627 | 0.8247 | 0.6681 | 0.7259 | 0.2658 | nan | 0.2930 | 0.4638 | 0.0 | 0.8004 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4139 | 0.0 | 0.0 | 0.6795 | 0.0 | 0.4294 | 0.4075 | 0.0 | nan | 0.0 | 0.2218 | 0.0 | 0.0 | 0.8407 | 0.7744 | 0.9051 | 0.0 | 0.2460 | 0.3197 | 0.0 |
| 0.3886 | 93.0 | 9951 | 0.5873 | 0.3101 | 0.3758 | 0.8353 | nan | 0.7940 | 0.9477 | 0.6976 | 0.8584 | 0.2758 | nan | 0.4271 | 0.6610 | 0.0 | 0.9388 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7573 | 0.0 | 0.0 | 0.8781 | 0.0 | 0.5376 | 0.5151 | 0.0 | nan | 0.0 | 0.3348 | 0.0 | 0.0 | 0.9022 | 0.8517 | 0.9536 | 0.0 | 0.3092 | 0.3870 | 0.0 | nan | 0.6608 | 0.8232 | 0.6593 | 0.7203 | 0.2580 | nan | 0.2931 | 0.4603 | 0.0 | 0.8067 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4182 | 0.0 | 0.0 | 0.6808 | 0.0 | 0.4315 | 0.4049 | 0.0 | nan | 0.0 | 0.2203 | 0.0 | 0.0 | 0.8401 | 0.7743 | 0.9055 | 0.0 | 0.2425 | 0.3223 | 0.0 |
| 0.3732 | 93.4579 | 10000 | 0.5890 | 0.3100 | 0.3785 | 0.8340 | nan | 0.7750 | 0.9476 | 0.7198 | 0.8649 | 0.2833 | nan | 0.4461 | 0.6623 | 0.0 | 0.9432 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7612 | 0.0 | 0.0 | 0.8657 | 0.0 | 0.5605 | 0.5325 | 0.0 | nan | 0.0 | 0.3362 | 0.0 | 0.0 | 0.8960 | 0.8673 | 0.9590 | 0.0 | 0.3077 | 0.3834 | 0.0 | nan | 0.6597 | 0.8220 | 0.6669 | 0.7094 | 0.2634 | nan | 0.2962 | 0.4612 | 0.0 | 0.8019 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4166 | 0.0 | 0.0 | 0.6806 | 0.0 | 0.4268 | 0.4087 | 0.0 | nan | 0.0 | 0.2241 | 0.0 | 0.0 | 0.8381 | 0.7739 | 0.9036 | 0.0 | 0.2473 | 0.3211 | 0.0 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
juhw/uiop92 | juhw | "2025-03-12T21:25:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-12T21:21:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
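Since the card itself does not provide a snippet, the following is a minimal, hedged sketch based only on the repository tags (`transformers`, `llama`, `text-generation`, `conversational`); the chat-template usage, dtype, and generation settings are assumptions, not documented behaviour of this model.

```python
# Hedged sketch: the repo id comes from this card; everything else (bfloat16,
# device_map, availability of a chat template, generation settings) is assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "juhw/uiop92"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Hello, what can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```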
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ihsanahakiim/videomae-base-finetuned-signlanguage | ihsanahakiim | "2025-03-02T06:22:58Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | "2025-03-02T03:21:39Z" | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-signlanguage
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-signlanguage
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3358
- Accuracy: 0.6567
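The card does not include an inference example; the snippet below is a hedged sketch using the generic `transformers` video-classification pipeline. The clip path is a placeholder, and the predicted labels depend on the fine-tuned head, which this card does not list.

```python
# Hedged sketch: model id from this card; the video file is a hypothetical
# local clip, and decord must be installed for video decoding.
from transformers import pipeline

classifier = pipeline(
    "video-classification",
    model="ihsanahakiim/videomae-base-finetuned-signlanguage",
)
print(classifier("example_sign_clip.mp4"))  # top predicted labels with scores
```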
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3560
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:--------:|:----:|:---------------:|:--------:|
| 4.2372 | 0.0065 | 23 | 4.2439 | 0.0186 |
| 4.2325 | 1.0065 | 46 | 4.2369 | 0.0186 |
| 4.1795 | 2.0065 | 69 | 4.2250 | 0.0233 |
| 4.211 | 3.0065 | 92 | 4.2141 | 0.0140 |
| 4.2361 | 4.0065 | 115 | 4.2051 | 0.0372 |
| 4.2524 | 5.0065 | 138 | 4.2052 | 0.0233 |
| 4.2297 | 6.0065 | 161 | 4.2014 | 0.0186 |
| 4.2596 | 7.0065 | 184 | 4.1950 | 0.0233 |
| 4.1911 | 8.0065 | 207 | 4.1898 | 0.0326 |
| 4.184 | 9.0065 | 230 | 4.1860 | 0.0279 |
| 4.1731 | 10.0065 | 253 | 4.1680 | 0.0419 |
| 4.098 | 11.0065 | 276 | 4.1413 | 0.0512 |
| 4.1105 | 12.0065 | 299 | 4.1351 | 0.0279 |
| 4.1477 | 13.0065 | 322 | 4.1182 | 0.0372 |
| 4.0121 | 14.0065 | 345 | 4.0324 | 0.0605 |
| 3.8587 | 15.0065 | 368 | 3.9973 | 0.0512 |
| 3.8878 | 16.0065 | 391 | 3.9077 | 0.0651 |
| 3.7205 | 17.0065 | 414 | 3.8901 | 0.0698 |
| 3.6613 | 18.0065 | 437 | 3.7347 | 0.1349 |
| 3.5438 | 19.0065 | 460 | 3.6275 | 0.1488 |
| 3.4033 | 20.0065 | 483 | 3.5495 | 0.1442 |
| 3.2043 | 21.0065 | 506 | 3.5864 | 0.1349 |
| 3.1477 | 22.0065 | 529 | 3.4515 | 0.1674 |
| 3.0344 | 23.0065 | 552 | 3.3110 | 0.2233 |
| 2.9459 | 24.0065 | 575 | 3.2645 | 0.2605 |
| 2.6629 | 25.0065 | 598 | 3.1746 | 0.2558 |
| 2.764 | 26.0065 | 621 | 3.0833 | 0.3163 |
| 2.4924 | 27.0065 | 644 | 2.9918 | 0.3023 |
| 2.6696 | 28.0065 | 667 | 3.0009 | 0.3349 |
| 2.4616 | 29.0065 | 690 | 2.8396 | 0.4 |
| 2.2084 | 30.0065 | 713 | 2.8039 | 0.3674 |
| 2.3011 | 31.0065 | 736 | 2.7465 | 0.4 |
| 2.1059 | 32.0065 | 759 | 2.6865 | 0.4140 |
| 2.0525 | 33.0065 | 782 | 2.6070 | 0.4326 |
| 2.1054 | 34.0065 | 805 | 2.6387 | 0.3953 |
| 1.8791 | 35.0065 | 828 | 2.5539 | 0.4326 |
| 1.7834 | 36.0065 | 851 | 2.4750 | 0.4326 |
| 1.5749 | 37.0065 | 874 | 2.4880 | 0.4233 |
| 1.6162 | 38.0065 | 897 | 2.3581 | 0.4884 |
| 1.5611 | 39.0065 | 920 | 2.2846 | 0.5256 |
| 1.5449 | 40.0065 | 943 | 2.2999 | 0.5116 |
| 1.6013 | 41.0065 | 966 | 2.2349 | 0.5302 |
| 1.3959 | 42.0065 | 989 | 2.1957 | 0.5488 |
| 1.1607 | 43.0065 | 1012 | 2.1559 | 0.5209 |
| 1.2663 | 44.0065 | 1035 | 2.1192 | 0.5721 |
| 1.0869 | 45.0065 | 1058 | 2.0522 | 0.5535 |
| 1.2477 | 46.0065 | 1081 | 2.0597 | 0.5767 |
| 0.9665 | 47.0065 | 1104 | 2.0365 | 0.5535 |
| 1.0521 | 48.0065 | 1127 | 1.9669 | 0.5767 |
| 0.8273 | 49.0065 | 1150 | 1.9918 | 0.5907 |
| 0.8396 | 50.0065 | 1173 | 1.9576 | 0.5860 |
| 0.9543 | 51.0065 | 1196 | 1.9371 | 0.5953 |
| 0.8095 | 52.0065 | 1219 | 1.8800 | 0.5953 |
| 0.7694 | 53.0065 | 1242 | 1.8737 | 0.6 |
| 0.8243 | 54.0065 | 1265 | 1.8846 | 0.6093 |
| 0.6632 | 55.0065 | 1288 | 1.8230 | 0.6 |
| 0.7446 | 56.0065 | 1311 | 1.7898 | 0.6093 |
| 0.7044 | 57.0065 | 1334 | 1.7740 | 0.5907 |
| 0.6732 | 58.0065 | 1357 | 1.8061 | 0.6047 |
| 0.5786 | 59.0065 | 1380 | 1.7060 | 0.6186 |
| 0.6348 | 60.0065 | 1403 | 1.7004 | 0.6140 |
| 0.5706 | 61.0065 | 1426 | 1.7013 | 0.6279 |
| 0.5007 | 62.0065 | 1449 | 1.6992 | 0.6186 |
| 0.5078 | 63.0065 | 1472 | 1.6649 | 0.6047 |
| 0.5048 | 64.0065 | 1495 | 1.6449 | 0.6140 |
| 0.4526 | 65.0065 | 1518 | 1.6256 | 0.6279 |
| 0.504 | 66.0065 | 1541 | 1.6401 | 0.6372 |
| 0.3824 | 67.0065 | 1564 | 1.5941 | 0.6093 |
| 0.453 | 68.0065 | 1587 | 1.6236 | 0.6186 |
| 0.3618 | 69.0065 | 1610 | 1.6100 | 0.6186 |
| 0.3689 | 70.0065 | 1633 | 1.5488 | 0.6419 |
| 0.3545 | 71.0065 | 1656 | 1.5390 | 0.6465 |
| 0.4126 | 72.0065 | 1679 | 1.5287 | 0.6558 |
| 0.2734 | 73.0065 | 1702 | 1.4978 | 0.6465 |
| 0.3144 | 74.0065 | 1725 | 1.5038 | 0.6326 |
| 0.3152 | 75.0065 | 1748 | 1.5692 | 0.6326 |
| 0.371 | 76.0065 | 1771 | 1.5331 | 0.6558 |
| 0.3033 | 77.0065 | 1794 | 1.4733 | 0.6465 |
| 0.2574 | 78.0065 | 1817 | 1.5694 | 0.5907 |
| 0.2562 | 79.0065 | 1840 | 1.5097 | 0.6279 |
| 0.2162 | 80.0065 | 1863 | 1.4782 | 0.6512 |
| 0.2493 | 81.0065 | 1886 | 1.4350 | 0.6465 |
| 0.2173 | 82.0065 | 1909 | 1.4730 | 0.6093 |
| 0.2508 | 83.0065 | 1932 | 1.4735 | 0.6186 |
| 0.1932 | 84.0065 | 1955 | 1.4491 | 0.6326 |
| 0.1822 | 85.0065 | 1978 | 1.4155 | 0.6326 |
| 0.2051 | 86.0065 | 2001 | 1.4431 | 0.6419 |
| 0.2269 | 87.0065 | 2024 | 1.4029 | 0.6419 |
| 0.1747 | 88.0065 | 2047 | 1.4643 | 0.6233 |
| 0.1464 | 89.0065 | 2070 | 1.3921 | 0.6558 |
| 0.1642 | 90.0065 | 2093 | 1.4033 | 0.6512 |
| 0.1582 | 91.0065 | 2116 | 1.3728 | 0.6512 |
| 0.1641 | 92.0065 | 2139 | 1.3756 | 0.6326 |
| 0.1292 | 93.0065 | 2162 | 1.3731 | 0.6512 |
| 0.1285 | 94.0065 | 2185 | 1.3559 | 0.6698 |
| 0.1405 | 95.0065 | 2208 | 1.4126 | 0.6233 |
| 0.1299 | 96.0065 | 2231 | 1.3524 | 0.6419 |
| 0.1166 | 97.0065 | 2254 | 1.3812 | 0.6512 |
| 0.1434 | 98.0065 | 2277 | 1.4055 | 0.6279 |
| 0.1748 | 99.0065 | 2300 | 1.3894 | 0.6558 |
| 0.0999 | 100.0065 | 2323 | 1.3665 | 0.6326 |
| 0.1361 | 101.0065 | 2346 | 1.3776 | 0.6372 |
| 0.118 | 102.0065 | 2369 | 1.3635 | 0.6558 |
| 0.0996 | 103.0065 | 2392 | 1.3477 | 0.6512 |
| 0.1232 | 104.0065 | 2415 | 1.3550 | 0.6419 |
| 0.0783 | 105.0065 | 2438 | 1.3460 | 0.6233 |
| 0.1517 | 106.0065 | 2461 | 1.3527 | 0.6279 |
| 0.1007 | 107.0065 | 2484 | 1.3040 | 0.6465 |
| 0.1036 | 108.0065 | 2507 | 1.3216 | 0.6698 |
| 0.1085 | 109.0065 | 2530 | 1.2975 | 0.6326 |
| 0.0691 | 110.0065 | 2553 | 1.3401 | 0.6512 |
| 0.1231 | 111.0065 | 2576 | 1.3251 | 0.6372 |
| 0.0801 | 112.0065 | 2599 | 1.3120 | 0.6605 |
| 0.0784 | 113.0065 | 2622 | 1.3061 | 0.6605 |
| 0.0891 | 114.0065 | 2645 | 1.2882 | 0.6558 |
| 0.0792 | 115.0065 | 2668 | 1.3531 | 0.6558 |
| 0.0772 | 116.0065 | 2691 | 1.3200 | 0.6698 |
| 0.1068 | 117.0065 | 2714 | 1.3186 | 0.6744 |
| 0.0711 | 118.0065 | 2737 | 1.3067 | 0.6419 |
| 0.0982 | 119.0065 | 2760 | 1.3161 | 0.6512 |
| 0.0741 | 120.0065 | 2783 | 1.3029 | 0.6512 |
| 0.1507 | 121.0065 | 2806 | 1.3406 | 0.6605 |
| 0.0602 | 122.0065 | 2829 | 1.3187 | 0.6558 |
| 0.0748 | 123.0065 | 2852 | 1.2874 | 0.6605 |
| 0.0638 | 124.0065 | 2875 | 1.2871 | 0.6791 |
| 0.0915 | 125.0065 | 2898 | 1.2869 | 0.6465 |
| 0.0749 | 126.0065 | 2921 | 1.2859 | 0.6558 |
| 0.0717 | 127.0065 | 2944 | 1.3222 | 0.6372 |
| 0.0539 | 128.0065 | 2967 | 1.3263 | 0.6326 |
| 0.0488 | 129.0065 | 2990 | 1.2945 | 0.6512 |
| 0.0696 | 130.0065 | 3013 | 1.2636 | 0.6698 |
| 0.0665 | 131.0065 | 3036 | 1.2910 | 0.6698 |
| 0.0562 | 132.0065 | 3059 | 1.2820 | 0.6558 |
| 0.0527 | 133.0065 | 3082 | 1.2927 | 0.6651 |
| 0.057 | 134.0065 | 3105 | 1.2846 | 0.6558 |
| 0.0764 | 135.0065 | 3128 | 1.3104 | 0.6651 |
| 0.0805 | 136.0065 | 3151 | 1.3110 | 0.6465 |
| 0.0503 | 137.0065 | 3174 | 1.3107 | 0.6558 |
| 0.0629 | 138.0065 | 3197 | 1.2915 | 0.6465 |
| 0.0491 | 139.0065 | 3220 | 1.2753 | 0.6558 |
| 0.0456 | 140.0065 | 3243 | 1.3105 | 0.6605 |
| 0.053 | 141.0065 | 3266 | 1.2686 | 0.6558 |
| 0.051 | 142.0065 | 3289 | 1.2831 | 0.6651 |
| 0.0669 | 143.0065 | 3312 | 1.2852 | 0.6698 |
| 0.0445 | 144.0065 | 3335 | 1.2868 | 0.6651 |
| 0.0414 | 145.0065 | 3358 | 1.2806 | 0.6698 |
| 0.0575 | 146.0065 | 3381 | 1.2815 | 0.6651 |
| 0.0392 | 147.0065 | 3404 | 1.2596 | 0.6558 |
| 0.0898 | 148.0065 | 3427 | 1.2666 | 0.6698 |
| 0.0617 | 149.0065 | 3450 | 1.2629 | 0.6605 |
| 0.0453 | 150.0065 | 3473 | 1.2797 | 0.6651 |
| 0.0398 | 151.0065 | 3496 | 1.2698 | 0.6558 |
| 0.0435 | 152.0065 | 3519 | 1.2721 | 0.6558 |
| 0.0507 | 153.0065 | 3542 | 1.2745 | 0.6512 |
| 0.0525 | 154.0051 | 3560 | 1.2777 | 0.6512 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.0.1+cu118
- Datasets 3.3.2
- Tokenizers 0.21.0
|
abdullahfurquan/mistral_instruct_generation_own_data | abdullahfurquan | "2024-04-16T10:58:52Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-04-16T10:29:44Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
datasets:
- generator
model-index:
- name: mistral_instruct_generation_own_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation_own_data
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4734
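Since the usage sections below are empty, here is a hedged sketch of loading this LoRA adapter on top of its base model with `peft`; the dtype, device placement, and `[INST]` prompt format are assumptions about typical Mistral-Instruct usage rather than something documented in this card.

```python
# Hedged sketch: adapter and base-model ids come from this card; everything
# else (fp16, device_map, prompt format) is assumed, not documented here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "abdullahfurquan/mistral_instruct_generation_own_data")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

prompt = "[INST] Write one sentence about fine-tuning. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```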
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6476 | 2.5 | 20 | 0.4816 |
| 0.3414 | 5.0 | 40 | 0.3842 |
| 0.2565 | 7.5 | 60 | 0.3931 |
| 0.1973 | 10.0 | 80 | 0.4198 |
| 0.1245 | 12.5 | 100 | 0.4734 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
espnet/kan-bayashi_vctk_tts_train_gst_xvector_conformer_fastspeech2_transform-truncated-e051a9 | espnet | "2021-07-03T15:00:13Z" | 1 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | "2022-03-02T23:29:05Z" | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- vctk
license: cc-by-4.0
---
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394608/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
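Until the official demo above is filled in, the following is a hedged sketch using ESPnet2's generic `Text2Speech` inference API. The x-vector file name and its use are assumptions: GST + x-vector models generally need reference speech and/or a precomputed speaker embedding at synthesis time, and neither is shipped with this card.

```python
# Hedged sketch: the model tag is the one named in this card; the x-vector
# path/dimension and output handling are assumptions.
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "kan-bayashi/vctk_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss.ave"
)
spembs = np.load("xvector_p225.npy")  # hypothetical precomputed speaker x-vector
result = tts("Hello, this is a test sentence.", spembs=spembs)
sf.write("out.wav", result["wav"].numpy(), tts.fs)
```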
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Preeda/custom-resnet18-model-1 | Preeda | "2024-06-05T03:30:45Z" | 79 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T03:30:36Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
eageringdev/c76a542b-056b-4a05-913b-6b4f0a86ac4e | eageringdev | "2025-02-07T22:36:12Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:llamafactory/tiny-random-Llama-3",
"base_model:adapter:llamafactory/tiny-random-Llama-3",
"license:apache-2.0",
"region:us"
] | null | "2025-02-07T22:30:03Z" | ---
library_name: peft
license: apache-2.0
base_model: llamafactory/tiny-random-Llama-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c76a542b-056b-4a05-913b-6b4f0a86ac4e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: llamafactory/tiny-random-Llama-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f5b2c02e5839d40f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f5b2c02e5839d40f_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: eageringdev/c76a542b-056b-4a05-913b-6b4f0a86ac4e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 1540
micro_batch_size: 2
mlflow_experiment_name: /tmp/f5b2c02e5839d40f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6f786248-2197-4023-a381-b548ed19bd9d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 6f786248-2197-4023-a381-b548ed19bd9d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c76a542b-056b-4a05-913b-6b4f0a86ac4e
This model is a fine-tuned version of [llamafactory/tiny-random-Llama-3](https://huggingface.co/llamafactory/tiny-random-Llama-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1540
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.7691 | 0.0000 | 1 | 11.7650 |
| 11.7368 | 0.0158 | 385 | 11.7344 |
| 11.7267 | 0.0316 | 770 | 11.7316 |
| 11.7243 | 0.0475 | 1155 | 11.7306 |
| 11.7302 | 0.0633 | 1540 | 11.7305 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
abrotech/ALM-Math-Latex | abrotech | "2025-04-01T15:33:41Z" | 0 | 0 | transformers | [
"transformers",
"qwen2_5_vl",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-04-01T15:33:37Z" | |
bigband/VisionarySekhmet | bigband | "2025-04-13T20:31:25Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | "2025-04-13T20:20:49Z" | |
samoline/4fb6091d-d88a-49fe-9d4c-ad3b17a39a59 | samoline | "2025-02-26T03:59:38Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
] | null | "2025-02-26T03:51:58Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4fb6091d-d88a-49fe-9d4c-ad3b17a39a59
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bd07874fa96e3b1a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bd07874fa96e3b1a_train_data.json
type:
field_input: description
field_instruction: input persona
field_output: synthesized text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/4fb6091d-d88a-49fe-9d4c-ad3b17a39a59
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/bd07874fa96e3b1a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: c36f8c49-e5a9-4577-b0b9-4685f343695c
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: c36f8c49-e5a9-4577-b0b9-4685f343695c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4fb6091d-d88a-49fe-9d4c-ad3b17a39a59
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: adamw_bnb_8bit (OptimizerNames.ADAMW_BNB) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.456 | 0.0000 | 1 | 1.2673 |
| 1.3132 | 0.0000 | 2 | 1.2674 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
edixo/road_good_damaged_condition | edixo | "2021-07-05T14:43:15Z" | 84 | 5 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-03-02T23:29:05Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: road_good_damaged_condition
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9583333134651184
---
# road_good_damaged_condition
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
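As an illustrative sketch (not from the original card), the classifier can be used with the standard `image-classification` pipeline; the image path below is a placeholder:

```python
# Sketch: classify a road photo with the transformers image-classification pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="edixo/road_good_damaged_condition")
print(classifier("road_photo.jpg"))  # placeholder path; returns labels with confidence scores
```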
## Example Images
#### damaged road

#### good road
 |
kujirahand/whisper-ja | kujirahand | "2023-02-19T02:59:53Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-02-16T10:15:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-ja
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-ja
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4305
- Wer: 22.5625
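A minimal inference sketch (not part of the original card): the fine-tuned checkpoint should work with the `automatic-speech-recognition` pipeline, assuming 16 kHz audio input; the file name is a placeholder:

```python
# Sketch: transcribe Japanese speech with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kujirahand/whisper-ja")
print(asr("sample_ja.wav")["text"])  # "sample_ja.wav" is a placeholder audio file
```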
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 0.01 | 5 | 1.8992 | 29.5429 |
| No log | 0.01 | 10 | 1.4305 | 22.5625 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.6.1
- Tokenizers 0.13.2
|
mradermacher/symptom-check-april-3-GGUF | mradermacher | "2024-12-27T09:09:46Z" | 83 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:akhileshav8/symptom-check-april-3",
"base_model:quantized:akhileshav8/symptom-check-april-3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-27T09:08:55Z" | ---
base_model: akhileshav8/symptom-check-april-3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/akhileshav8/symptom-check-april-3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
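As a hedged sketch (not from the original card), one way to run a quant from this repo in Python is through llama-cpp-python, assuming a recent version that provides `Llama.from_pretrained`; the file name matches the Q4_K_M entry in the table below, and the prompt is illustrative:

```python
# Sketch: load the Q4_K_M quant with llama-cpp-python and run a short completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/symptom-check-april-3-GGUF",
    filename="symptom-check-april-3.Q4_K_M.gguf",
)
out = llm("I have a headache and a mild fever. What could this be?", max_tokens=64)
print(out["choices"][0]["text"])
```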
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/symptom-check-april-3-GGUF/resolve/main/symptom-check-april-3.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
haonan-li/bactrian-id-bloom-7b1-lora | haonan-li | "2023-06-13T13:28:40Z" | 0 | 0 | null | [
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | "2023-06-13T13:28:28Z" | ---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Indonesian.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-id-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
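A rough inference sketch (not part of the original card), assuming the adapter weights in this repo apply cleanly on top of BLOOM-7b1; the Alpaca-style prompt below is an assumption (Bactrian-X is adapted from Alpaca-LoRA), so check the repo for the exact template used in training:

```python
# Sketch: apply the Indonesian Bactrian LoRA adapter to BLOOM-7b1.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1")
model = PeftModel.from_pretrained(base, "haonan-li/bactrian-id-bloom-7b1-lora")

# Alpaca-style prompt (assumed; see the Bactrian-X repo for the exact training template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nJelaskan apa itu fotosintesis.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```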
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
srvmishra832/emotions-dataset-distilbert-base-uncased | srvmishra832 | "2025-03-27T13:53:29Z" | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:dair-ai/emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-09T07:08:30Z" | |
MinHyeong/dolly-v2-7b_focal04 | MinHyeong | "2025-03-30T16:15:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-30T16:09:27Z" | |
lukasfast/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF | lukasfast | "2025-01-22T18:12:21Z" | 36 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"de",
"bg",
"cs",
"da",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sl",
"sv",
"sk",
"base_model:openGPT-X/Teuken-7B-instruct-research-v0.4",
"base_model:quantized:openGPT-X/Teuken-7B-instruct-research-v0.4",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-22T18:11:56Z" | ---
language:
- de
- bg
- cs
- da
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sl
- sv
- sk
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
library_name: transformers
base_model: openGPT-X/Teuken-7B-instruct-research-v0.4
license: other
tags:
- llama-cpp
- gguf-my-repo
---
# lukasfast/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF
This model was converted to GGUF format from [`openGPT-X/Teuken-7B-instruct-research-v0.4`](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lukasfast/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF --hf-file teuken-7b-instruct-research-v0.4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo lukasfast/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF --hf-file teuken-7b-instruct-research-v0.4-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo lukasfast/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF --hf-file teuken-7b-instruct-research-v0.4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo lukasfast/Teuken-7B-instruct-research-v0.4-Q4_K_M-GGUF --hf-file teuken-7b-instruct-research-v0.4-q4_k_m.gguf -c 2048
```
|
xiaozhangMJXXZ/SEX-lora-all | xiaozhangMJXXZ | "2023-01-26T07:25:32Z" | 0 | 62 | null | [
"region:us"
] | null | "2023-01-22T17:47:32Z" | https://t.me/+a-k8rVfjIVk3NGU1
https://t.me/loraeveryone
These are the Telegram groups; updates will be posted to Telegram first, since the original files can be uploaded there directly, while the Hugging Face page will be updated more slowly!
Anything that cannot be downloaded from Hugging Face can also be fetched directly from Telegram.
This is a collection of NSFW LoRAs; please help keep it up to date!!!
Files are provided both as a complete bundle and per character. Because files with Chinese names cannot be downloaded directly, they are packed as archives; after downloading, unzip them to find the corresponding Chinese names inside. Maintainer ("校长") contact: QQ 3062945846

This is only a re-upload and reorganization for the convenience of Chinese-speaking users!!

A screenshot of the catalog is included for reference!

We deeply respect every LoRA author!!

Thank you for your work!!

Hello everyone, this is the maintainer ("校长"). I am currently consolidating higher-quality LoRA models: 70+ have been organized so far, labeled in Chinese, with the trigger tags written directly into the file names. For some complex outfits and accessories, a text file with the same name is included alongside for easy reference. If you have good LoRAs that differ from the current ones, please send them to me; once everything is organized and categorized, I will share it with everyone (these are LoRA models, not the usual base models). |
carnival13/xnli-non_gst-mbert-b-cased | carnival13 | "2025-04-14T18:06:19Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-19T23:12:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
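In the absence of author-provided code, a generic sketch follows. It assumes (from the model name) an XNLI-style premise/hypothesis classifier whose label names come from the checkpoint's config; the sentence pair is illustrative:

```python
# Sketch: score a premise/hypothesis pair with the text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification", model="carnival13/xnli-non_gst-mbert-b-cased")
print(clf({"text": "A man is playing a guitar.", "text_pair": "Someone is making music."}))
```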
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
freshpearYoon/largev2_temp | freshpearYoon | "2024-02-05T05:44:16Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-29T13:15:35Z" | ---
language:
- ko
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: whisper_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_finetune
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the aihub_1_15 dataset.
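A minimal inference sketch (not from the original card), assuming the full fine-tuned weights are in this repo, 16 kHz mono audio, and a transformers version whose Whisper `generate` accepts `language`/`task` arguments; the audio file name is a placeholder:

```python
# Sketch: transcribe Korean audio with the fine-tuned Whisper large-v2 checkpoint.
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("freshpearYoon/largev2_temp")
model = WhisperForConditionalGeneration.from_pretrained("freshpearYoon/largev2_temp")

audio, _ = librosa.load("sample_ko.wav", sr=16000)  # placeholder audio file
features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
ids = model.generate(features, language="ko", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```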
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.16.1
- Tokenizers 0.15.1
|
raajkumar16/english-tamil-colloquial-translator | raajkumar16 | "2025-02-17T06:06:51Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"llama",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"base_model:adapter:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-12T08:21:04Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat-bnb-4bit
tags:
- unsloth
- generated_from_trainer
model-index:
- name: english-tamil-colloquial-translator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-tamil-colloquial-translator
This model is a fine-tuned version of [unsloth/tinyllama-chat-bnb-4bit](https://huggingface.co/unsloth/tinyllama-chat-bnb-4bit) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.4556
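A rough usage sketch (not part of the original card), assuming the adapter and its 4-bit base load through PEFT's auto class; if this repo does not ship a tokenizer, load it from the base model instead, and note that the prompt format is purely illustrative:

```python
# Sketch: load the adapter together with its base model and translate a sentence.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "raajkumar16/english-tamil-colloquial-translator", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("raajkumar16/english-tamil-colloquial-translator")

prompt = "Translate to colloquial Tamil: Where are you going?"  # illustrative prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```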
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 14.8974 | 2.0 | 2 | 9.4556 |
| 14.8974 | 4.0 | 4 | 9.4556 |
| 14.8974 | 6.0 | 6 | 9.4556 |
| 14.8974 | 8.0 | 8 | 9.4556 |
| 14.8974 | 10.0 | 10 | 9.4556 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0 |
PHL99/Reinforce-Pixelcopter-PLE-v0 | PHL99 | "2023-09-29T19:19:25Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-08T22:31:30Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 24.60 +/- 13.43
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jmailman/jb_mailman_marketing_mail | jmailman | "2024-03-18T17:39:11Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-03-18T04:08:03Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
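No code was provided; a generic causal-LM sketch (assuming the repo contains a standard checkpoint that fits your hardware, with the prompt purely illustrative) could look like:

```python
# Sketch: generate marketing copy with the fine-tuned checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jmailman/jb_mailman_marketing_mail")
model = AutoModelForCausalLM.from_pretrained("jmailman/jb_mailman_marketing_mail", device_map="auto")

prompt = "Write a short marketing email announcing a spring sale."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```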
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kallilikhitha123/llama-Quantized-Model-8B_10lakh_03-03-2025 | kallilikhitha123 | "2025-03-03T11:58:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-03T10:55:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
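No code is given in this card; a generic text-generation sketch (assuming the checkpoint loads with its stored quantization config and the prompt is illustrative) might be:

```python
# Sketch: run the quantized checkpoint through the text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kallilikhitha123/llama-Quantized-Model-8B_10lakh_03-03-2025",
    device_map="auto",
)
print(generator("Explain what model quantization does.", max_new_tokens=60)[0]["generated_text"])
```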
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |