| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
CennetOguz/paligemma2_object_detection | CennetOguz | "2025-01-22T12:49:07Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"paligemma",
"image-text-to-text",
"generated_from_trainer",
"base_model:google/paligemma2-3b-pt-448",
"base_model:finetune:google/paligemma2-3b-pt-448",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-01-22T12:46:37Z" | ---
library_name: transformers
license: gemma
base_model: google/paligemma2-3b-pt-448
tags:
- generated_from_trainer
model-index:
- name: paligemma2_object_detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma2_object_detection
This model is a fine-tuned version of [google/paligemma2-3b-pt-448](https://huggingface.co/google/paligemma2-3b-pt-448) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: adamw_hf with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
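For reference, the hyperparameters listed above map roughly onto the standard `transformers` `TrainingArguments`. The sketch below is a reconstruction from these values, not the original training script; the `output_dir` is an assumption, and any PaliGemma-specific data collation is omitted.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above (not the original script).
training_args = TrainingArguments(
    output_dir="paligemma2_object_detection",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,
    optim="adamw_hf",
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=2,
)
```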
### Training results
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Tokenizers 0.21.0
|
memevas/DL35 | memevas | "2025-02-13T11:02:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-13T10:41:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
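The card does not yet provide a snippet. As a placeholder, here is a minimal, untested sketch based only on the repository tags (`llama`, `text-generation`, `conversational`); whether a chat template is configured is not documented.

```python
from transformers import pipeline

# Hedged sketch: the tags indicate a Llama text-generation model; nothing more is documented.
generator = pipeline("text-generation", model="memevas/DL35")
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```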
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Apmeiz/fumibabba | Apmeiz | "2023-04-24T10:39:00Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-04-24T10:37:31Z" | ---
license: creativeml-openrail-m
---
|
RichardLu/mistral7b_aspectsentiment_res | RichardLu | "2025-03-18T23:07:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-18T22:40:06Z" | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RichardLu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
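A minimal loading sketch with Unsloth's `FastLanguageModel` is shown below; only the model ID comes from this card, while the sequence length and 4-bit flag are assumptions.

```python
from unsloth import FastLanguageModel

# Hedged sketch: parameters below are assumptions; only the model ID comes from this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="RichardLu/mistral7b_aspectsentiment_res",
    max_seq_length=2048,   # assumed
    load_in_4bit=True,     # assumed, matching the 4-bit base model
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```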
|
tmnam20/bert-base-multilingual-cased-vtoc-1 | tmnam20 | "2024-01-16T07:01:59Z" | 95 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-16T07:00:47Z" | ---
language:
- en
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
model-index:
- name: bert-base-multilingual-cased-vtoc-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/VTOC
type: tmnam20/VieGLUE
config: vtoc
split: validation
args: vtoc
metrics:
- name: Accuracy
type: accuracy
value: 0.8083014746040416
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-vtoc-1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tmnam20/VieGLUE/VTOC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6734
- Accuracy: 0.8083
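A minimal usage sketch with the standard `transformers` pipeline follows; the label-to-topic mapping comes from the model's config and is not documented in this card, and the example sentence assumes VTOC is a Vietnamese topic-classification subset of VieGLUE.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tmnam20/bert-base-multilingual-cased-vtoc-1")
# Example Vietnamese input (placeholder text).
print(classifier("Đội tuyển bóng đá quốc gia giành chiến thắng trong trận đấu tối qua."))
```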
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4828 | 2.19 | 500 | 0.7023 | 0.8012 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.0.dev20231203+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
simrana5/RickBotExample | simrana5 | "2021-08-09T10:57:40Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
tags:
- conversational
---
# RickBot built for [Chai](https://chai.ml/)
Make your own [here](https://colab.research.google.com/drive/1LtVm-VHvDnfNy7SsbZAqhh49ikBwh1un?usp=sharing)
|
mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF | mradermacher | "2024-12-07T14:24:05Z" | 54 | 0 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:oxford-llms/ultrachat_filtered",
"dataset:oxford-llms/Magpie-Qwen2.5-Pro-1M-v0.1-filtered",
"dataset:oxford-llms/european_social_survey_2020_sft",
"dataset:oxford-llms/european_social_survey_2023_sft",
"dataset:oxford-llms/world_values_survey_2017_2022_sft",
"dataset:oxford-llms/european_social_survey_2023_germany_sft",
"base_model:IeBoytsov/llama3-1-ox-llms-8b-sft-full-germany-data",
"base_model:quantized:IeBoytsov/llama3-1-ox-llms-8b-sft-full-germany-data",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-07T13:32:22Z" | ---
base_model: IeBoytsov/llama3-1-ox-llms-8b-sft-full-germany-data
datasets:
- oxford-llms/ultrachat_filtered
- oxford-llms/Magpie-Qwen2.5-Pro-1M-v0.1-filtered
- oxford-llms/european_social_survey_2020_sft
- oxford-llms/european_social_survey_2023_sft
- oxford-llms/world_values_survey_2017_2022_sft
- oxford-llms/european_social_survey_2023_germany_sft
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/IeBoytsov/llama3-1-ox-llms-8b-sft-full-germany-data
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
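As a concrete illustration, here is a minimal Python sketch using `llama-cpp-python`; the file name is the Q4_K_M quant from the table below, and the context size is an assumption.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Illustrative only: pick any quant from the table below that fits your hardware.
llm = Llama(model_path="llama3-1-ox-llms-8b-sft-full-germany-data.Q4_K_M.gguf", n_ctx=4096)
out = llm("What does the European Social Survey measure?", max_tokens=128)
print(out["choices"][0]["text"])
```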
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama3-1-ox-llms-8b-sft-full-germany-data-GGUF/resolve/main/llama3-1-ox-llms-8b-sft-full-germany-data.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
OPEA/DeepSeek-V3-int4-sym-gguf-q4-0-inc | OPEA | "2025-02-12T08:19:54Z" | 1,987 | 2 | null | [
"gguf",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:deepseek-ai/DeepSeek-V3",
"base_model:quantized:deepseek-ai/DeepSeek-V3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-24T00:02:39Z" | ---
datasets:
- NeelNanda/pile-10k
base_model:
- deepseek-ai/DeepSeek-V3
---
## Model Details
This GGUF model is an INT4 quantization (group size 32, symmetric) of [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3), generated with [intel/auto-round](https://github.com/intel/auto-round).
## How To Use
### Requirements
Please follow the [Build llama.cpp locally](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md) guide to install the necessary dependencies.
### INT4 Inference
```bash
>>> text="9.11和9.8哪个数字大"
>>> ./llama-cli -m DeepSeek-V3-bf16-256x20B-Q4_0.gguf-00001-of-00009.gguf -p "<|begin▁of▁sentence|>You are a helpful assistant.<|User|>$text\n<|Assistant|>" -n 512 --threads 16 -no-cnv
## Generated:
## 要比较 **9.11** 和 **9.8** 的大小,可以将它们转化为小数形式以便比较。
## 1. **9.11** 已经是小数形式。
## 2. **9.8** 可以表示为 **9.80**。
## 现在比较小数点后的数字:
## - **9.11** 的小数部分是 **0.11**
## - **9.80** 的小数部分是 **0.80**
## 因为 **0.80** > **0.11**,所以 **9.8** 大于 **9.11**。
## 最终答案是:
## \boxed{9.8} [end of text]
>>> text="strawberry中有几个r?"
>>> ./llama-cli -m DeepSeek-V3-bf16-256x20B-Q4_0.gguf-00001-of-00009.gguf -p "<|begin▁of▁sentence|>You are a helpful assistant.<|User|>$text\n<|Assistant|>" -n 512 --threads 16 -no-cnv
## Generated:
## The word "strawberry" contains two 'r' characters. Here's the breakdown:
## - **S**
## - **T**
## - **R**
## - **A**
## - **W**
## - **B**
## - **E**
## - **R**
## - **R**
## - **Y**
## So, there are **2** 'r' in "strawberry". [end of text]
>>> text="There is a girl who likes adventure,"
>>> ./llama-cli -m DeepSeek-V3-bf16-256x20B-Q4_0.gguf-00001-of-00009.gguf -p "<|begin▁of▁sentence|>You are a helpful assistant.<|User|>$text\n<|Assistant|>" -n 512 --threads 16 -no-cnv
## Generated:
## That’s great! Adventures can be thrilling and enriching experiences. Here are a few ideas to inspire her adventurous spirit:
## ### Outdoor Adventures:
## 1. **Hiking**: Explore national parks or local trails to connect with nature.
## 2. **Camping**: Spend a night under the stars or in a forest.
## 3. **Rock Climbing**: Challenge yourself with cliffs or indoor climbing walls.
## 4. **Kayaking or Canoeing**: Explore rivers, lakes, or even the ocean.
## ### Travel Adventures:
## 5. **Backpacking**: Travel to new countries or regions with minimal luggage.
## 6. **Road Trips**: Explore nearby towns or cities by driving or biking.
## 7. **Volunteering Abroad**: Combine adventure with helping others in foreign countries.
##
## ### Thrilling Activities:
## 8. **Skydiving**: Experience the thrill of free-falling.
## 9. **Scuba Diving**: Discover underwater worlds and marine life.
## 10. **Zip-lining**: Feel the rush of flying through the air.
##
## ### Creative Adventures:
## 11. **Urban Exploration**: Discover hidden gems in your city or town.
## 12. **Photography Expeditions**: Capture unique landscapes or cultures.
## 13. **Learning Something New**: Try a hobby like surfing, pottery, or archery.
##
## ### Nature Adventures:
## 14. **Wildlife Safaris**: Observe animals in their natural habitats.
## 15. **Forest Bathing**: Immerse yourself in nature for relaxation and mindfulness.
## 16. **Gardening**: Explore growing your own plants or creating a garden.
##
## ### Cultural Adventures:
## 17. **Festivals**: Attend cultural events to learn about traditions.
## 18. **Historical Sites**: Visit museums, ruins, or ancient landmarks.
## 19. **Language Learning**: Learn a new language and immerse yourself in its culture.
##
## No matter the adventure, it’s important to stay safe, prepared, and open-minded. Adventure is about exploring, learning, and embracing the unknown! 🌟 [end of text]
>>> text="Please give a brief introduction of DeepSeek company."
>>> ./llama-cli -m DeepSeek-V3-bf16-256x20B-Q4_0.gguf-00001-of-00009.gguf -p "<|begin▁of▁sentence|>You are a helpful assistant.<|User|>$text\n<|Assistant|>" -n 512 --threads 16 -no-cnv
## Generated:
## DeepSeek is a Chinese company specializing in artificial intelligence (AI) technologies and applications. Founded in 2023, DeepSeek focuses on developing advanced AI solutions for various industries, including finance, healthcare, education, and entertainment. The company emphasizes innovation in natural language processing (NLP), machine learning, and data analytics to create intelligent systems that enhance decision-making and efficiency. DeepSeek aims to bridge the gap between cutting-edge AI research and practical applications, contributing to technological advancements and digital transformation across sectors. [end of text]
```
### Generate the model
**5×80 GB GPUs are needed (this could be optimized), along with about 1.4 TB of CPU memory.**
**1. Add metadata to the bf16 model** https://huggingface.co/opensourcerelease/DeepSeek-V3-bf16
```python
import safetensors
from safetensors.torch import save_file
for i in range(1, 164):
idx_str = "0" * (5-len(str(i))) + str(i)
safetensors_path = f"model-{idx_str}-of-000163.safetensors"
print(safetensors_path)
tensors = dict()
with safetensors.safe_open(safetensors_path, framework="pt") as f:
for key in f.keys():
tensors[key] = f.get_tensor(key)
save_file(tensors, safetensors_path, metadata={'format': 'pt'})
```
**2. Replace `modeling_deepseek.py` with the following file**, which mainly aligns devices and removes `torch.no_grad`, since AutoRound needs to do some tuning.
https://github.com/intel/auto-round/blob/deepseekv3/modeling_deepseek.py
pip3 install git+https://github.com/intel/auto-round.git
**3. Tuning**
```python
import torch
import transformers  # needed below for transformers.modeling_utils.Conv1D
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DeepSeek-V3-hf"  # path to the bf16 checkpoint prepared in steps 1-2
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype="auto")
block = model.model.layers
device_map = {}
for n, m in block.named_modules():
if isinstance(m, (torch.nn.Linear, transformers.modeling_utils.Conv1D)):
if "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2])<63 and "down_proj" not in n :
device ="cuda:1"
elif "experts" in n and ("shared_experts" not in n) and "down_proj" in n and int(n.split('.')[-2])<63:
device = "cuda:1"
elif "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2]) >= 63 and int(n.split('.')[-2]) < 128 and "down_proj" not in n:
device = "cuda:2"
elif "experts" in n and ("shared_experts" not in n) and "down_proj" in n and int(n.split('.')[-2]) >= 63 and int(n.split('.')[-2]) < 128:
device = "cuda:2"
elif "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2]) >= 128 and int(
n.split('.')[-2]) < 192 and "down_proj" not in n:
device = "cuda:3"
elif "experts" in n and ("shared_experts" not in n) and "down_proj" in n and int(
n.split('.')[-2]) >= 128 and int(n.split('.')[-2]) < 192:
device = "cuda:3"
elif "experts" in n and ("shared_experts" not in n) and "down_proj" not in n and int(
n.split('.')[-2]) >= 192:
device = "cuda:4"
elif "experts" in n and ("shared_experts" not in n) and "down_proj" in n and int(
n.split('.')[-2]) >= 192:
device = "cuda:4"
else:
device = "cuda:0"
n = n[2:]
device_map.update({n: device})
from auto_round import AutoRound
autoround = AutoRound(model=model, tokenizer=tokenizer, device_map=device_map,
                      iters=200, batch_size=8, seqlen=512, enable_torch_compile=False)
autoround.quantize()
autoround.save_quantized(format="gguf:q4_0", output_dir="tmp_autoround")
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
@article{cheng2023optimize, title={Optimize weight rounding via signed gradient descent for the quantization of llms}, author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi}, journal={arXiv preprint arXiv:2309.05516}, year={2023} }
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round) |
zhmi0909/task-1-Qwen-Qwen1.5-1.8B | zhmi0909 | "2025-01-07T11:41:13Z" | 73 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | "2025-01-02T09:37:35Z" | ---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
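The card does not yet provide a snippet. Here is a minimal sketch of loading the adapter with PEFT, based only on the metadata above (`base_model: Qwen/Qwen1.5-1.8B`, `library_name: peft`); it is an assumption that this is a standard PEFT adapter loadable this way.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Hedged sketch: base model and adapter ID come from this card's metadata.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
model = PeftModel.from_pretrained(base, "zhmi0909/task-1-Qwen-Qwen1.5-1.8B")
```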
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
davidschulte/ESM_per_sent_default | davidschulte | "2025-03-26T15:21:48Z" | 18 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:community-datasets/per_sent",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-08T14:39:52Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- community-datasets/per_sent
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM community-datasets/per_sent
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** community-datasets/per_sent
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** community-datasets/per_sent
- **Subset [optional]:** default
- **Text Column:** DOCUMENT
- **Label Column:** TRUE_SENTIMENT
- **Dataset Split:** train
- **Sample size [optional]:** 3355
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on a suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs, please have a look at the [official GitHub repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
stablediffusionapi/colossus-project-xl-sfwns | stablediffusionapi | "2025-01-20T11:35:44Z" | 7 | 1 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-11-30T09:20:15Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Colossus Project XL (SFW&NSFW) API Inference

## Get API Key
Get API key from [ModelsLab](https://modelslab.com/), No Payment needed.
Replace the key in the code below, and change **model_id** to "colossus-project-xl-sfwns".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/colossus-project-xl-sfwns)
Model link: [View model](https://stablediffusionapi.com/models/colossus-project-xl-sfwns)
Credits: [View credits](https://civitai.com/?query=Colossus%20Project%20XL%20%28SFW%26NSFW%29)
View all models: [View Models](https://stablediffusionapi.com/models)

    import requests
    import json

    url = "https://stablediffusionapi.com/api/v4/dreambooth"

    payload = json.dumps({
        "key": "your_api_key",
        "model_id": "colossus-project-xl-sfwns",
        "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
        "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
        "width": "512",
        "height": "512",
        "samples": "1",
        "num_inference_steps": "30",
        "safety_checker": "no",
        "enhance_prompt": "yes",
        "seed": None,
        "guidance_scale": 7.5,
        "multi_lingual": "no",
        "panorama": "no",
        "self_attention": "no",
        "upscale": "no",
        "embeddings": "embeddings_model_id",
        "lora": "lora_model_id",
        "webhook": None,
        "track_id": None
    })

    headers = {
        'Content-Type': 'application/json'
    }

    response = requests.request("POST", url, headers=headers, data=payload)

    print(response.text)
> Use this coupon code to get 25% off **DMGG0RBN** |
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2 | Zoyd | "2024-05-28T16:13:37Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"dataset:mlabonne/orpo-dpo-mix-40k",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | "2024-05-28T15:16:51Z" | ---
license: other
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- dpo
---
**Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> |
# NeuralDaredevil-8B-abliterated

This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated), trained for one epoch on [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).
## 🏆 Evaluation
### Open LLM Leaderboard
TBD.
### Nous
Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** |
| [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 |
| [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 |
| [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 |
| [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 |
| [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 |
| [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 |
## 🌳 Model family tree
 |
ngugi1/taxi-v1 | ngugi1 | "2024-02-05T14:54:49Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-05T14:54:06Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # assumption: the original snippet does not show its imports

# load_from_hub is assumed to be the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="ngugi1/taxi-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
xuykin/ex-de | xuykin | "2024-01-29T16:07:34Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-01-29T15:43:17Z" | ---
license: creativeml-openrail-m
---
|
sudocoder/Qwen2-0.5B-GRPO-test | sudocoder | "2025-03-21T21:32:59Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-21T19:54:01Z" | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sudocoder/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s61 | jonatasgrosman | "2022-07-25T18:12:28Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-25T18:12:16Z" | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s61
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
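A minimal transcription sketch with the `transformers` ASR pipeline follows; the audio path is a placeholder, and remember the 16 kHz requirement noted above.

```python
from transformers import pipeline

# Hedged sketch: "audio_16khz.wav" is a placeholder for your own 16 kHz Spanish recording.
asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_es_xls-r_accent_surpeninsular-10_nortepeninsular-0_s61",
)
print(asr("audio_16khz.wav")["text"])
```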
|
100rab25/spa_images_classifier_jd_v1_convnext | 100rab25 | "2024-03-20T07:14:49Z" | 262 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-03-20T06:49:46Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: spa_images_classifier_jd_v1_convnext
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.978066110596231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spa_images_classifier_jd_v1_convnext
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0662
- Accuracy: 0.9781
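A minimal usage sketch with the `transformers` image-classification pipeline is shown below; the image path is a placeholder, and the class labels come from the model's config and are not listed in this card.

```python
from transformers import pipeline

# Hedged sketch: "spa_photo.jpg" is a placeholder for your own image.
classifier = pipeline("image-classification", model="100rab25/spa_images_classifier_jd_v1_convnext")
print(classifier("spa_photo.jpg"))
```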
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2494 | 1.0 | 227 | 0.1194 | 0.9555 |
| 0.2333 | 2.0 | 455 | 0.1008 | 0.9635 |
| 0.1977 | 3.0 | 683 | 0.0855 | 0.9703 |
| 0.1405 | 4.0 | 911 | 0.0792 | 0.9744 |
| 0.1575 | 5.0 | 1138 | 0.0734 | 0.9731 |
| 0.0948 | 6.0 | 1366 | 0.0666 | 0.9778 |
| 0.1049 | 7.0 | 1594 | 0.0662 | 0.9781 |
| 0.0928 | 8.0 | 1822 | 0.0693 | 0.9774 |
| 0.0903 | 9.0 | 2049 | 0.0704 | 0.9771 |
| 0.0759 | 9.97 | 2270 | 0.0652 | 0.9778 |
### Framework versions
- Transformers 4.35.0
- Pytorch 1.12.1+cu113
- Datasets 2.17.1
- Tokenizers 0.14.1
|
mradermacher/BigLiberated-20B-V2-GGUF | mradermacher | "2024-05-06T05:29:57Z" | 4 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:win10/BigLiberated-20B-V2",
"base_model:quantized:win10/BigLiberated-20B-V2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-03-31T15:47:35Z" | ---
base_model: win10/BigLiberated-20B-V2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
static quants of https://huggingface.co/win10/BigLiberated-20B-V2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q2_K.gguf) | Q2_K | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.IQ3_XS.gguf) | IQ3_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.IQ3_S.gguf) | IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.IQ3_M.gguf) | IQ3_M | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q3_K_M.gguf) | Q3_K_M | 11.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q3_K_L.gguf) | Q3_K_L | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.IQ4_XS.gguf) | IQ4_XS | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q4_0.gguf) | Q4_0 | 12.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q4_K_S.gguf) | Q4_K_S | 13.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q4_K_M.gguf) | Q4_K_M | 14.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q5_K_S.gguf) | Q5_K_S | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q5_K_M.gguf) | Q5_K_M | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q6_K.gguf) | Q6_K | 18.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BigLiberated-20B-V2-GGUF/resolve/main/BigLiberated-20B-V2.Q8_0.gguf) | Q8_0 | 22.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
phongtintruong/meomeo-mhubert-vietbud-k-200 | phongtintruong | "2025-02-05T06:37:32Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"meomeo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-05T06:37:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/falcon-7b-instruct-GGUF | mradermacher | "2025-02-14T00:45:13Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:tiiuae/falcon-refinedweb",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:quantized:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-14T00:20:30Z" | ---
base_model: tiiuae/falcon-7b-instruct
datasets:
- tiiuae/falcon-refinedweb
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tiiuae/falcon-7b-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/falcon-7b-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
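For a quick local test, something like the following `llama-cpp-python` sketch should work (assuming your installed version supports Falcon-architecture GGUFs; if no chat template is found it falls back to a default format):
```python
# Minimal sketch, assuming llama-cpp-python supports Falcon-architecture GGUFs
# in your installed version and the chosen quant fits in memory.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/falcon-7b-instruct-GGUF",
    filename="falcon-7b-instruct.Q4_K_M.gguf",  # "fast, recommended" in the table below
)
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a GGUF file is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```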
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q2_K.gguf) | Q2_K | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q6_K.gguf) | Q6_K | 7.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/falcon-7b-instruct-GGUF/resolve/main/falcon-7b-instruct.f16.gguf) | f16 | 14.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lesso/a9f95564-8108-4018-b0a8-c1519cc5a6bb | lesso | "2025-02-09T00:18:33Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"region:us"
] | null | "2025-02-07T21:42:16Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a9f95564-8108-4018-b0a8-c1519cc5a6bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# a9f95564-8108-4018-b0a8-c1519cc5a6bb
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 400
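Expressed as `transformers.TrainingArguments`, the values above map roughly onto the following (a sketch only; the actual run was driven by an axolotl config, and the one-to-one mapping of names is an assumption):
```python
# Rough TrainingArguments equivalent of the hyperparameters listed above
# (a sketch; the real run was configured through axolotl, not this code).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="a9f95564-8108-4018-b0a8-c1519cc5a6bb",
    learning_rate=2.03e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,
    warmup_steps=50,
    max_steps=400,
    lr_scheduler_type="constant",
    optim="adamw_bnb_8bit",
    seed=42,
)
```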
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0051 | 1 | 2.7134 |
| 0.2963 | 0.2528 | 50 | 0.1043 |
| 0.1056 | 0.5057 | 100 | 0.1043 |
| 0.105 | 0.7585 | 150 | 0.1033 |
| 0.1054 | 1.0126 | 200 | 0.1043 |
| 0.1045 | 1.2655 | 250 | 0.1060 |
| 0.1043 | 1.5183 | 300 | 0.1068 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jasmin0600/HuggyTheDog | Jasmin0600 | "2023-03-17T02:20:19Z" | 13 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-03-17T02:19:59Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: Jasmin0600/HuggyTheDog
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
EMBO/sd-smallmol-roles-v2 | EMBO | "2023-01-25T13:13:17Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:source_data_nlp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-08-10T16:28:27Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- source_data_nlp
metrics:
- precision
- recall
- f1
model-index:
- name: sd-smallmol-roles-v2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data_nlp
type: source_data_nlp
args: SMALL_MOL_ROLES
metrics:
- name: Precision
type: precision
value: 0.9628394473558838
- name: Recall
type: recall
value: 0.9716346153846154
- name: F1
type: f1
value: 0.9672170375687963
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sd-smallmol-roles-v2
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
- Accuracy Score: 0.9995
- Precision: 0.9628
- Recall: 0.9716
- F1: 0.9672
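Since this is a token-classification checkpoint, it can typically be queried with the standard `transformers` pipeline (a minimal inference sketch, assuming the tokenizer and label set ship with the checkpoint):
```python
# Minimal inference sketch, assuming the checkpoint loads with the standard
# token-classification pipeline (labels come from the checkpoint itself).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="EMBO/sd-smallmol-roles-v2",
    aggregation_strategy="simple",
)
print(ner("Cells were treated with 100 nM rapamycin for 24 h."))
```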
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0013 | 1.0 | 1569 | 0.0015 | 0.9995 | 0.9628 | 0.9716 | 0.9672 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 1.17.0
- Tokenizers 0.12.1
|
LoneStriker/Smaugv0.1-6.0bpw-h6-exl2 | LoneStriker | "2024-01-26T06:30:33Z" | 6 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-26T06:19:10Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
---

This model is a finetune of jondurbin's excellent [bagel](https://huggingface.co/jondurbin/bagel-34b-v0.2) model.
It has been trained with new datasets and a new technique, which we will share with the community soon. |
hgnoi/0XebiVLSh5SuD3J2 | hgnoi | "2024-05-26T18:06:40Z" | 141 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-26T18:05:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
automerger/MistrollPercival_01-7B | automerger | "2024-04-30T07:11:19Z" | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | "2024-04-30T07:11:19Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
---
# MistrollPercival_01-7B
MistrollPercival_01-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: BarraHome/Mistroll-7B-v2.2
- model: AurelPx/Percival_01-7b-slerp
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/MistrollPercival_01-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Forkits/q-FrozenLake-v1-4x4-no-slippery | Forkits | "2022-05-22T00:58:04Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-22T00:51:52Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-no-slippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="Forkits/q-FrozenLake-v1-4x4-no-slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
DatTran0509/Finetune_BartPho_QA | DatTran0509 | "2025-04-03T13:42:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-03T13:42:04Z" | |
cuongdz01/layoutlm-cord-2 | cuongdz01 | "2024-01-02T09:16:39Z" | 17 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-01-02T08:36:20Z" | ---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
model-index:
- name: layoutlm-cord-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-cord-2
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1775
- Enu.cnt: {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220}
- Enu.discountprice: {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10}
- Enu.etc: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
- Enu.itemsubtotal: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6}
- Enu.nm: {'precision': 0.918918918918919, 'recall': 0.9482071713147411, 'f1': 0.9333333333333333, 'number': 251}
- Enu.num: {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11}
- Enu.price: {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246}
- Enu.sub.cnt: {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17}
- Enu.sub.nm: {'precision': 0.7027027027027027, 'recall': 0.8387096774193549, 'f1': 0.7647058823529411, 'number': 31}
- Enu.sub.price: {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20}
- Enu.unitprice: {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67}
- Otal.cashprice: {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68}
- Otal.changeprice: {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56}
- Otal.creditcardprice: {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16}
- Otal.emoneyprice: {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2}
- Otal.menuqty Cnt: {'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899, 'number': 29}
- Otal.menutype Cnt: {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 7}
- Otal.total Etc: {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3}
- Otal.total Price: {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95}
- Ub Total.discount Price: {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7}
- Ub Total.etc: {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9}
- Ub Total.service Price: {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12}
- Ub Total.subtotal Price: {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65}
- Ub Total.tax Price: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43}
- Overall Precision: 0.9364
- Overall Recall: 0.9444
- Overall F1: 0.9404
- Overall Accuracy: 0.9609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Enu.cnt | Enu.discountprice | Enu.etc | Enu.itemsubtotal | Enu.nm | Enu.num | Enu.price | Enu.sub.cnt | Enu.sub.nm | Enu.sub.price | Enu.unitprice | Otal.cashprice | Otal.changeprice | Otal.creditcardprice | Otal.emoneyprice | Otal.menuqty Cnt | Otal.menutype Cnt | Otal.total Etc | Otal.total Price | Ub Total.discount Price | Ub Total.etc | Ub Total.service Price | Ub Total.subtotal Price | Ub Total.tax Price | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:---------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 2.98 | 1.0 | 50 | 2.3672 | {'precision': 0.8323699421965318, 'recall': 0.6545454545454545, 'f1': 0.732824427480916, 'number': 220} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.22123893805309736, 'recall': 0.398406374501992, 'f1': 0.2844950213371266, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.43343653250773995, 'recall': 0.5691056910569106, 'f1': 0.49209138840070304, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 20} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 67} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 68} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 56} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 95} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 43} | 0.4004 | 0.2968 | 0.3409 | 0.4065 |
| 2.0084 | 2.0 | 100 | 1.5671 | {'precision': 0.7938931297709924, 'recall': 0.9454545454545454, 'f1': 0.8630705394190872, 'number': 220} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.5615384615384615, 'recall': 0.8725099601593626, 'f1': 0.6833073322932918, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.4724770642201835, 'recall': 0.8373983739837398, 'f1': 0.6041055718475073, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 20} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 67} | {'precision': 0.22429906542056074, 'recall': 0.35294117647058826, 'f1': 0.2742857142857143, 'number': 68} | {'precision': 0.20408163265306123, 'recall': 0.17857142857142858, 'f1': 0.19047619047619047, 'number': 56} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.2374429223744292, 'recall': 0.5473684210526316, 'f1': 0.33121019108280253, 'number': 95} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 43} | 0.4657 | 0.5556 | 0.5067 | 0.6471 |
| 1.4166 | 3.0 | 150 | 1.1110 | {'precision': 0.8228346456692913, 'recall': 0.95, 'f1': 0.8818565400843881, 'number': 220} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.7249190938511327, 'recall': 0.8924302788844621, 'f1': 0.8000000000000002, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.65625, 'recall': 0.8536585365853658, 'f1': 0.7420494699646644, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.75, 'recall': 0.0967741935483871, 'f1': 0.1714285714285714, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 20} | {'precision': 1.0, 'recall': 0.08955223880597014, 'f1': 0.1643835616438356, 'number': 67} | {'precision': 0.4639175257731959, 'recall': 0.6617647058823529, 'f1': 0.5454545454545455, 'number': 68} | {'precision': 0.6666666666666666, 'recall': 0.75, 'f1': 0.7058823529411765, 'number': 56} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.3273809523809524, 'recall': 0.5789473684210527, 'f1': 0.41825095057034223, 'number': 95} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.06666666666666667, 'recall': 0.07692307692307693, 'f1': 0.07142857142857144, 'number': 65} | {'precision': 0.24675324675324675, 'recall': 0.4418604651162791, 'f1': 0.31666666666666665, 'number': 43} | 0.5885 | 0.6321 | 0.6095 | 0.7355 |
| 1.0685 | 4.0 | 200 | 0.8304 | {'precision': 0.8421052631578947, 'recall': 0.9454545454545454, 'f1': 0.8907922912205567, 'number': 220} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.7945205479452054, 'recall': 0.9243027888446215, 'f1': 0.85451197053407, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.6905537459283387, 'recall': 0.8617886178861789, 'f1': 0.7667269439421338, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.375, 'recall': 0.0967741935483871, 'f1': 0.15384615384615383, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 20} | {'precision': 0.9375, 'recall': 0.22388059701492538, 'f1': 0.3614457831325301, 'number': 67} | {'precision': 0.6506024096385542, 'recall': 0.7941176470588235, 'f1': 0.7152317880794702, 'number': 68} | {'precision': 0.7230769230769231, 'recall': 0.8392857142857143, 'f1': 0.7768595041322314, 'number': 56} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.17857142857142858, 'recall': 0.1724137931034483, 'f1': 0.17543859649122806, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5853658536585366, 'recall': 0.7578947368421053, 'f1': 0.6605504587155963, 'number': 95} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.7083333333333334, 'recall': 0.7846153846153846, 'f1': 0.7445255474452555, 'number': 65} | {'precision': 0.23684210526315788, 'recall': 0.4186046511627907, 'f1': 0.3025210084033613, 'number': 43} | 0.6921 | 0.7087 | 0.7003 | 0.7955 |
| 0.8325 | 5.0 | 250 | 0.6508 | {'precision': 0.8455284552845529, 'recall': 0.9454545454545454, 'f1': 0.8927038626609443, 'number': 220} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8076923076923077, 'recall': 0.9203187250996016, 'f1': 0.8603351955307262, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.8075601374570447, 'recall': 0.9552845528455285, 'f1': 0.8752327746741155, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.2727272727272727, 'recall': 0.0967741935483871, 'f1': 0.14285714285714285, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 20} | {'precision': 0.8863636363636364, 'recall': 0.582089552238806, 'f1': 0.7027027027027027, 'number': 67} | {'precision': 0.7073170731707317, 'recall': 0.8529411764705882, 'f1': 0.7733333333333334, 'number': 68} | {'precision': 0.7205882352941176, 'recall': 0.875, 'f1': 0.7903225806451613, 'number': 56} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.5555555555555556, 'recall': 0.5172413793103449, 'f1': 0.5357142857142857, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.8349514563106796, 'recall': 0.9052631578947369, 'f1': 0.8686868686868687, 'number': 95} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.8484848484848485, 'recall': 0.8615384615384616, 'f1': 0.8549618320610687, 'number': 65} | {'precision': 0.3283582089552239, 'recall': 0.5116279069767442, 'f1': 0.39999999999999997, 'number': 43} | 0.7672 | 0.7743 | 0.7708 | 0.8329 |
| 0.6768 | 6.0 | 300 | 0.5356 | {'precision': 0.8524590163934426, 'recall': 0.9454545454545454, 'f1': 0.8965517241379309, 'number': 220} | {'precision': 1.0, 'recall': 0.3, 'f1': 0.4615384615384615, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8220640569395018, 'recall': 0.9203187250996016, 'f1': 0.868421052631579, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.8669064748201439, 'recall': 0.9796747967479674, 'f1': 0.9198473282442748, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.4, 'recall': 0.12903225806451613, 'f1': 0.1951219512195122, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 20} | {'precision': 0.9454545454545454, 'recall': 0.7761194029850746, 'f1': 0.8524590163934428, 'number': 67} | {'precision': 0.7763157894736842, 'recall': 0.8676470588235294, 'f1': 0.8194444444444444, 'number': 68} | {'precision': 0.7936507936507936, 'recall': 0.8928571428571429, 'f1': 0.8403361344537815, 'number': 56} | {'precision': 0.5454545454545454, 'recall': 0.375, 'f1': 0.4444444444444444, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.6666666666666666, 'recall': 0.6896551724137931, 'f1': 0.6779661016949153, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.8431372549019608, 'recall': 0.9052631578947369, 'f1': 0.8730964467005077, 'number': 95} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | {'precision': 0.8656716417910447, 'recall': 0.8923076923076924, 'f1': 0.8787878787878788, 'number': 65} | {'precision': 0.38333333333333336, 'recall': 0.5348837209302325, 'f1': 0.4466019417475728, 'number': 43} | 0.8064 | 0.8045 | 0.8054 | 0.8559 |
| 0.5703 | 7.0 | 350 | 0.4634 | {'precision': 0.8536585365853658, 'recall': 0.9545454545454546, 'f1': 0.9012875536480687, 'number': 220} | {'precision': 0.5, 'recall': 0.3, 'f1': 0.37499999999999994, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8392857142857143, 'recall': 0.9362549800796812, 'f1': 0.8851224105461393, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.8905109489051095, 'recall': 0.991869918699187, 'f1': 0.9384615384615385, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.36363636363636365, 'recall': 0.12903225806451613, 'f1': 0.19047619047619047, 'number': 31} | {'precision': 0.6666666666666666, 'recall': 0.1, 'f1': 0.1739130434782609, 'number': 20} | {'precision': 0.9672131147540983, 'recall': 0.8805970149253731, 'f1': 0.9218749999999999, 'number': 67} | {'precision': 0.8676470588235294, 'recall': 0.8676470588235294, 'f1': 0.8676470588235294, 'number': 68} | {'precision': 0.8387096774193549, 'recall': 0.9285714285714286, 'f1': 0.8813559322033899, 'number': 56} | {'precision': 0.5555555555555556, 'recall': 0.625, 'f1': 0.5882352941176471, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7142857142857143, 'recall': 0.6896551724137931, 'f1': 0.7017543859649122, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.8787878787878788, 'recall': 0.9157894736842105, 'f1': 0.8969072164948454, 'number': 95} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.1111111111111111, 'f1': 0.16666666666666666, 'number': 9} | {'precision': 0.18181818181818182, 'recall': 0.16666666666666666, 'f1': 0.17391304347826086, 'number': 12} | {'precision': 0.8571428571428571, 'recall': 0.9230769230769231, 'f1': 0.888888888888889, 'number': 65} | {'precision': 0.48214285714285715, 'recall': 0.627906976744186, 'f1': 0.5454545454545454, 'number': 43} | 0.8295 | 0.8308 | 0.8301 | 0.8776 |
| 0.4767 | 8.0 | 400 | 0.4061 | {'precision': 0.8688524590163934, 'recall': 0.9636363636363636, 'f1': 0.9137931034482758, 'number': 220} | {'precision': 0.42857142857142855, 'recall': 0.3, 'f1': 0.3529411764705882, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8375451263537906, 'recall': 0.9243027888446215, 'f1': 0.8787878787878788, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.9138576779026217, 'recall': 0.991869918699187, 'f1': 0.9512670565302145, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.375, 'recall': 0.1935483870967742, 'f1': 0.25531914893617014, 'number': 31} | {'precision': 1.0, 'recall': 0.35, 'f1': 0.5185185185185185, 'number': 20} | {'precision': 0.96875, 'recall': 0.9253731343283582, 'f1': 0.9465648854961832, 'number': 67} | {'precision': 0.9375, 'recall': 0.8823529411764706, 'f1': 0.9090909090909091, 'number': 68} | {'precision': 0.8833333333333333, 'recall': 0.9464285714285714, 'f1': 0.9137931034482758, 'number': 56} | {'precision': 0.7222222222222222, 'recall': 0.8125, 'f1': 0.7647058823529411, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7333333333333333, 'recall': 0.7586206896551724, 'f1': 0.7457627118644068, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9175257731958762, 'recall': 0.9368421052631579, 'f1': 0.9270833333333333, 'number': 95} | {'precision': 0.3333333333333333, 'recall': 0.14285714285714285, 'f1': 0.2, 'number': 7} | {'precision': 0.5, 'recall': 0.2222222222222222, 'f1': 0.30769230769230765, 'number': 9} | {'precision': 0.6923076923076923, 'recall': 0.75, 'f1': 0.7199999999999999, 'number': 12} | {'precision': 0.8840579710144928, 'recall': 0.9384615384615385, 'f1': 0.9104477611940298, 'number': 65} | {'precision': 0.6363636363636364, 'recall': 0.813953488372093, 'f1': 0.7142857142857143, 'number': 43} | 0.8579 | 0.8586 | 0.8582 | 0.8975 |
| 0.4203 | 9.0 | 450 | 0.3582 | {'precision': 0.8760330578512396, 'recall': 0.9636363636363636, 'f1': 0.9177489177489176, 'number': 220} | {'precision': 0.6666666666666666, 'recall': 0.6, 'f1': 0.631578947368421, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8333333333333334, 'recall': 0.9163346613545816, 'f1': 0.8728652751423149, 'number': 251} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.9176029962546817, 'recall': 0.9959349593495935, 'f1': 0.9551656920077973, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.38461538461538464, 'recall': 0.3225806451612903, 'f1': 0.3508771929824561, 'number': 31} | {'precision': 1.0, 'recall': 0.45, 'f1': 0.6206896551724138, 'number': 20} | {'precision': 0.96875, 'recall': 0.9253731343283582, 'f1': 0.9465648854961832, 'number': 67} | {'precision': 0.9384615384615385, 'recall': 0.8970588235294118, 'f1': 0.9172932330827067, 'number': 68} | {'precision': 0.8833333333333333, 'recall': 0.9464285714285714, 'f1': 0.9137931034482758, 'number': 56} | {'precision': 0.6842105263157895, 'recall': 0.8125, 'f1': 0.742857142857143, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.7586206896551724, 'recall': 0.7586206896551724, 'f1': 0.7586206896551724, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9278350515463918, 'recall': 0.9473684210526315, 'f1': 0.9374999999999999, 'number': 95} | {'precision': 0.4, 'recall': 0.2857142857142857, 'f1': 0.3333333333333333, 'number': 7} | {'precision': 0.42857142857142855, 'recall': 0.3333333333333333, 'f1': 0.375, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9117647058823529, 'recall': 0.9538461538461539, 'f1': 0.9323308270676691, 'number': 65} | {'precision': 0.7959183673469388, 'recall': 0.9069767441860465, 'f1': 0.8478260869565216, 'number': 43} | 0.8668 | 0.8748 | 0.8708 | 0.9133 |
| 0.3626 | 10.0 | 500 | 0.3217 | {'precision': 0.8833333333333333, 'recall': 0.9636363636363636, 'f1': 0.9217391304347826, 'number': 220} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8523985239852399, 'recall': 0.9203187250996016, 'f1': 0.8850574712643678, 'number': 251} | {'precision': 1.0, 'recall': 0.18181818181818182, 'f1': 0.3076923076923077, 'number': 11} | {'precision': 0.9386973180076629, 'recall': 0.9959349593495935, 'f1': 0.9664694280078896, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.4482758620689655, 'recall': 0.41935483870967744, 'f1': 0.4333333333333333, 'number': 31} | {'precision': 1.0, 'recall': 0.7, 'f1': 0.8235294117647058, 'number': 20} | {'precision': 0.9841269841269841, 'recall': 0.9253731343283582, 'f1': 0.9538461538461538, 'number': 67} | {'precision': 0.9523809523809523, 'recall': 0.8823529411764706, 'f1': 0.916030534351145, 'number': 68} | {'precision': 0.896551724137931, 'recall': 0.9285714285714286, 'f1': 0.912280701754386, 'number': 56} | {'precision': 0.7647058823529411, 'recall': 0.8125, 'f1': 0.787878787878788, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.696969696969697, 'recall': 0.7931034482758621, 'f1': 0.7419354838709677, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9278350515463918, 'recall': 0.9473684210526315, 'f1': 0.9374999999999999, 'number': 95} | {'precision': 0.8333333333333334, 'recall': 0.7142857142857143, 'f1': 0.7692307692307692, 'number': 7} | {'precision': 0.5714285714285714, 'recall': 0.4444444444444444, 'f1': 0.5, 'number': 9} | {'precision': 0.8571428571428571, 'recall': 1.0, 'f1': 0.923076923076923, 'number': 12} | {'precision': 0.8985507246376812, 'recall': 0.9538461538461539, 'f1': 0.9253731343283582, 'number': 65} | {'precision': 0.8478260869565217, 'recall': 0.9069767441860465, 'f1': 0.8764044943820224, 'number': 43} | 0.8801 | 0.8849 | 0.8825 | 0.9213 |
| 0.3241 | 11.0 | 550 | 0.2949 | {'precision': 0.8907563025210085, 'recall': 0.9636363636363636, 'f1': 0.9257641921397379, 'number': 220} | {'precision': 0.8888888888888888, 'recall': 0.8, 'f1': 0.8421052631578948, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8597785977859779, 'recall': 0.9282868525896414, 'f1': 0.8927203065134101, 'number': 251} | {'precision': 1.0, 'recall': 0.36363636363636365, 'f1': 0.5333333333333333, 'number': 11} | {'precision': 0.9384615384615385, 'recall': 0.991869918699187, 'f1': 0.9644268774703556, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.5555555555555556, 'recall': 0.4838709677419355, 'f1': 0.5172413793103449, 'number': 31} | {'precision': 1.0, 'recall': 0.7, 'f1': 0.8235294117647058, 'number': 20} | {'precision': 0.9393939393939394, 'recall': 0.9253731343283582, 'f1': 0.9323308270676692, 'number': 67} | {'precision': 0.953125, 'recall': 0.8970588235294118, 'f1': 0.9242424242424244, 'number': 68} | {'precision': 0.9642857142857143, 'recall': 0.9642857142857143, 'f1': 0.9642857142857143, 'number': 56} | {'precision': 0.7777777777777778, 'recall': 0.875, 'f1': 0.823529411764706, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.696969696969697, 'recall': 0.7931034482758621, 'f1': 0.7419354838709677, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9278350515463918, 'recall': 0.9473684210526315, 'f1': 0.9374999999999999, 'number': 95} | {'precision': 0.8333333333333334, 'recall': 0.7142857142857143, 'f1': 0.7692307692307692, 'number': 7} | {'precision': 0.75, 'recall': 0.6666666666666666, 'f1': 0.7058823529411765, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9402985074626866, 'recall': 0.9692307692307692, 'f1': 0.9545454545454547, 'number': 65} | {'precision': 0.9767441860465116, 'recall': 0.9767441860465116, 'f1': 0.9767441860465116, 'number': 43} | 0.8981 | 0.8988 | 0.8984 | 0.9281 |
| 0.2899 | 12.0 | 600 | 0.2722 | {'precision': 0.902127659574468, 'recall': 0.9636363636363636, 'f1': 0.9318681318681319, 'number': 220} | {'precision': 0.9, 'recall': 0.9, 'f1': 0.9, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8768656716417911, 'recall': 0.9362549800796812, 'f1': 0.9055876685934489, 'number': 251} | {'precision': 1.0, 'recall': 0.6363636363636364, 'f1': 0.7777777777777778, 'number': 11} | {'precision': 0.9420849420849421, 'recall': 0.991869918699187, 'f1': 0.9663366336633663, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.6, 'recall': 0.5806451612903226, 'f1': 0.5901639344262295, 'number': 31} | {'precision': 0.9333333333333333, 'recall': 0.7, 'f1': 0.8, 'number': 20} | {'precision': 0.9393939393939394, 'recall': 0.9253731343283582, 'f1': 0.9323308270676692, 'number': 67} | {'precision': 0.9523809523809523, 'recall': 0.8823529411764706, 'f1': 0.916030534351145, 'number': 68} | {'precision': 0.9310344827586207, 'recall': 0.9642857142857143, 'f1': 0.9473684210526316, 'number': 56} | {'precision': 0.7777777777777778, 'recall': 0.875, 'f1': 0.823529411764706, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.71875, 'recall': 0.7931034482758621, 'f1': 0.7540983606557378, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9285714285714286, 'recall': 0.9578947368421052, 'f1': 0.9430051813471502, 'number': 95} | {'precision': 1.0, 'recall': 0.8571428571428571, 'f1': 0.923076923076923, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9545454545454546, 'recall': 0.9692307692307692, 'f1': 0.9618320610687022, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9066 | 0.9080 | 0.9073 | 0.9358 |
| 0.2563 | 13.0 | 650 | 0.2578 | {'precision': 0.9106382978723404, 'recall': 0.9727272727272728, 'f1': 0.9406593406593406, 'number': 220} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8805970149253731, 'recall': 0.9402390438247012, 'f1': 0.9094412331406552, 'number': 251} | {'precision': 1.0, 'recall': 0.6363636363636364, 'f1': 0.7777777777777778, 'number': 11} | {'precision': 0.9490196078431372, 'recall': 0.983739837398374, 'f1': 0.9660678642714571, 'number': 246} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 17} | {'precision': 0.6666666666666666, 'recall': 0.6451612903225806, 'f1': 0.6557377049180327, 'number': 31} | {'precision': 0.8333333333333334, 'recall': 0.75, 'f1': 0.7894736842105262, 'number': 20} | {'precision': 0.9411764705882353, 'recall': 0.9552238805970149, 'f1': 0.9481481481481482, 'number': 67} | {'precision': 0.9523809523809523, 'recall': 0.8823529411764706, 'f1': 0.916030534351145, 'number': 68} | {'precision': 0.9310344827586207, 'recall': 0.9642857142857143, 'f1': 0.9473684210526316, 'number': 56} | {'precision': 0.7777777777777778, 'recall': 0.875, 'f1': 0.823529411764706, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.7419354838709677, 'recall': 0.7931034482758621, 'f1': 0.7666666666666667, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9183673469387755, 'recall': 0.9473684210526315, 'f1': 0.9326424870466321, 'number': 95} | {'precision': 1.0, 'recall': 0.8571428571428571, 'f1': 0.923076923076923, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9402985074626866, 'recall': 0.9692307692307692, 'f1': 0.9545454545454547, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9097 | 0.9104 | 0.9100 | 0.9366 |
| 0.2384 | 14.0 | 700 | 0.2364 | {'precision': 0.9304347826086956, 'recall': 0.9727272727272728, 'f1': 0.9511111111111111, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8931297709923665, 'recall': 0.9322709163346613, 'f1': 0.9122807017543859, 'number': 251} | {'precision': 1.0, 'recall': 0.6363636363636364, 'f1': 0.7777777777777778, 'number': 11} | {'precision': 0.9494163424124513, 'recall': 0.991869918699187, 'f1': 0.9701789264413518, 'number': 246} | {'precision': 1.0, 'recall': 0.23529411764705882, 'f1': 0.38095238095238093, 'number': 17} | {'precision': 0.6756756756756757, 'recall': 0.8064516129032258, 'f1': 0.7352941176470588, 'number': 31} | {'precision': 0.9333333333333333, 'recall': 0.7, 'f1': 0.8, 'number': 20} | {'precision': 0.9545454545454546, 'recall': 0.9402985074626866, 'f1': 0.9473684210526316, 'number': 67} | {'precision': 0.9523809523809523, 'recall': 0.8823529411764706, 'f1': 0.916030534351145, 'number': 68} | {'precision': 0.9310344827586207, 'recall': 0.9642857142857143, 'f1': 0.9473684210526316, 'number': 56} | {'precision': 0.7777777777777778, 'recall': 0.875, 'f1': 0.823529411764706, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.696969696969697, 'recall': 0.7931034482758621, 'f1': 0.7419354838709677, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.91, 'recall': 0.9578947368421052, 'f1': 0.9333333333333333, 'number': 95} | {'precision': 1.0, 'recall': 0.8571428571428571, 'f1': 0.923076923076923, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9253731343283582, 'recall': 0.9538461538461539, 'f1': 0.9393939393939394, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9122 | 0.9158 | 0.9140 | 0.9418 |
| 0.2166 | 15.0 | 750 | 0.2280 | {'precision': 0.9511111111111111, 'recall': 0.9727272727272728, 'f1': 0.9617977528089887, 'number': 220} | {'precision': 0.8888888888888888, 'recall': 0.8, 'f1': 0.8421052631578948, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.8935361216730038, 'recall': 0.9362549800796812, 'f1': 0.914396887159533, 'number': 251} | {'precision': 1.0, 'recall': 0.6363636363636364, 'f1': 0.7777777777777778, 'number': 11} | {'precision': 0.9420849420849421, 'recall': 0.991869918699187, 'f1': 0.9663366336633663, 'number': 246} | {'precision': 1.0, 'recall': 0.6470588235294118, 'f1': 0.7857142857142858, 'number': 17} | {'precision': 0.6410256410256411, 'recall': 0.8064516129032258, 'f1': 0.7142857142857142, 'number': 31} | {'precision': 0.8333333333333334, 'recall': 0.75, 'f1': 0.7894736842105262, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.9523809523809523, 'recall': 0.8823529411764706, 'f1': 0.916030534351145, 'number': 68} | {'precision': 0.9473684210526315, 'recall': 0.9642857142857143, 'f1': 0.9557522123893805, 'number': 56} | {'precision': 0.8235294117647058, 'recall': 0.875, 'f1': 0.8484848484848485, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.7419354838709677, 'recall': 0.7931034482758621, 'f1': 0.7666666666666667, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9090909090909091, 'recall': 0.9473684210526315, 'f1': 0.9278350515463918, 'number': 95} | {'precision': 1.0, 'recall': 0.8571428571428571, 'f1': 0.923076923076923, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9178 | 0.9227 | 0.9202 | 0.9439 |
| 0.1959 | 16.0 | 800 | 0.2185 | {'precision': 0.9596412556053812, 'recall': 0.9727272727272728, 'f1': 0.9661399548532731, 'number': 220} | {'precision': 0.8888888888888888, 'recall': 0.8, 'f1': 0.8421052631578948, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9007633587786259, 'recall': 0.9402390438247012, 'f1': 0.9200779727095517, 'number': 251} | {'precision': 1.0, 'recall': 0.6363636363636364, 'f1': 0.7777777777777778, 'number': 11} | {'precision': 0.9457364341085271, 'recall': 0.991869918699187, 'f1': 0.9682539682539683, 'number': 246} | {'precision': 1.0, 'recall': 0.7647058823529411, 'f1': 0.8666666666666666, 'number': 17} | {'precision': 0.6578947368421053, 'recall': 0.8064516129032258, 'f1': 0.7246376811594202, 'number': 31} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 20} | {'precision': 0.9545454545454546, 'recall': 0.9402985074626866, 'f1': 0.9473684210526316, 'number': 67} | {'precision': 0.953125, 'recall': 0.8970588235294118, 'f1': 0.9242424242424244, 'number': 68} | {'precision': 0.9473684210526315, 'recall': 0.9642857142857143, 'f1': 0.9557522123893805, 'number': 56} | {'precision': 0.7368421052631579, 'recall': 0.875, 'f1': 0.7999999999999999, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.7419354838709677, 'recall': 0.7931034482758621, 'f1': 0.7666666666666667, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9090909090909091, 'recall': 0.9473684210526315, 'f1': 0.9278350515463918, 'number': 95} | {'precision': 1.0, 'recall': 0.8571428571428571, 'f1': 0.923076923076923, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9222 | 0.9250 | 0.9236 | 0.9460 |
| 0.182 | 17.0 | 850 | 0.2071 | {'precision': 0.9683257918552036, 'recall': 0.9727272727272728, 'f1': 0.9705215419501134, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9098039215686274, 'recall': 0.9243027888446215, 'f1': 0.9169960474308301, 'number': 251} | {'precision': 1.0, 'recall': 0.6363636363636364, 'f1': 0.7777777777777778, 'number': 11} | {'precision': 0.9529411764705882, 'recall': 0.9878048780487805, 'f1': 0.9700598802395209, 'number': 246} | {'precision': 0.9375, 'recall': 0.8823529411764706, 'f1': 0.9090909090909091, 'number': 17} | {'precision': 0.5909090909090909, 'recall': 0.8387096774193549, 'f1': 0.6933333333333334, 'number': 31} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.9538461538461539, 'recall': 0.9117647058823529, 'f1': 0.9323308270676691, 'number': 68} | {'precision': 0.9310344827586207, 'recall': 0.9642857142857143, 'f1': 0.9473684210526316, 'number': 56} | {'precision': 0.7368421052631579, 'recall': 0.875, 'f1': 0.7999999999999999, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.696969696969697, 'recall': 0.7931034482758621, 'f1': 0.7419354838709677, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9292929292929293, 'recall': 0.968421052631579, 'f1': 0.9484536082474226, 'number': 95} | {'precision': 1.0, 'recall': 0.8571428571428571, 'f1': 0.923076923076923, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9201 | 0.9250 | 0.9225 | 0.9452 |
| 0.1641 | 18.0 | 900 | 0.2048 | {'precision': 0.9726027397260274, 'recall': 0.9681818181818181, 'f1': 0.970387243735763, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9042145593869731, 'recall': 0.9402390438247012, 'f1': 0.9218749999999999, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9529411764705882, 'recall': 0.9878048780487805, 'f1': 0.9700598802395209, 'number': 246} | {'precision': 0.9375, 'recall': 0.8823529411764706, 'f1': 0.9090909090909091, 'number': 17} | {'precision': 0.65, 'recall': 0.8387096774193549, 'f1': 0.7323943661971831, 'number': 31} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 20} | {'precision': 0.9545454545454546, 'recall': 0.9402985074626866, 'f1': 0.9473684210526316, 'number': 67} | {'precision': 0.953125, 'recall': 0.8970588235294118, 'f1': 0.9242424242424244, 'number': 68} | {'precision': 0.9310344827586207, 'recall': 0.9642857142857143, 'f1': 0.9473684210526316, 'number': 56} | {'precision': 0.7368421052631579, 'recall': 0.875, 'f1': 0.7999999999999999, 'number': 16} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.7419354838709677, 'recall': 0.7931034482758621, 'f1': 0.7666666666666667, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9191919191919192, 'recall': 0.9578947368421052, 'f1': 0.9381443298969072, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9216 | 0.9266 | 0.9241 | 0.9477 |
| 0.1511 | 19.0 | 950 | 0.1996 | {'precision': 0.9771689497716894, 'recall': 0.9727272727272728, 'f1': 0.9749430523917996, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9140625, 'recall': 0.9322709163346613, 'f1': 0.923076923076923, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9457364341085271, 'recall': 0.991869918699187, 'f1': 0.9682539682539683, 'number': 246} | {'precision': 0.9411764705882353, 'recall': 0.9411764705882353, 'f1': 0.9411764705882353, 'number': 17} | {'precision': 0.6428571428571429, 'recall': 0.8709677419354839, 'f1': 0.7397260273972603, 'number': 31} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 20} | {'precision': 0.984375, 'recall': 0.9402985074626866, 'f1': 0.9618320610687023, 'number': 67} | {'precision': 0.9692307692307692, 'recall': 0.9264705882352942, 'f1': 0.9473684210526316, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.7368421052631579, 'recall': 0.875, 'f1': 0.7999999999999999, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.6764705882352942, 'recall': 0.7931034482758621, 'f1': 0.7301587301587301, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9191919191919192, 'recall': 0.9578947368421052, 'f1': 0.9381443298969072, 'number': 95} | {'precision': 1.0, 'recall': 0.8571428571428571, 'f1': 0.923076923076923, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9233 | 0.9304 | 0.9269 | 0.9494 |
| 0.1429 | 20.0 | 1000 | 0.2035 | {'precision': 0.9771689497716894, 'recall': 0.9727272727272728, 'f1': 0.9749430523917996, 'number': 220} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9115384615384615, 'recall': 0.9442231075697212, 'f1': 0.9275929549902153, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9420849420849421, 'recall': 0.991869918699187, 'f1': 0.9663366336633663, 'number': 246} | {'precision': 0.9375, 'recall': 0.8823529411764706, 'f1': 0.9090909090909091, 'number': 17} | {'precision': 0.7027027027027027, 'recall': 0.8387096774193549, 'f1': 0.7647058823529411, 'number': 31} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 20} | {'precision': 0.984375, 'recall': 0.9402985074626866, 'f1': 0.9618320610687023, 'number': 67} | {'precision': 0.9692307692307692, 'recall': 0.9264705882352942, 'f1': 0.9473684210526316, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.7777777777777778, 'recall': 0.875, 'f1': 0.823529411764706, 'number': 16} | {'precision': 0.3333333333333333, 'recall': 0.5, 'f1': 0.4, 'number': 2} | {'precision': 0.71875, 'recall': 0.7931034482758621, 'f1': 0.7540983606557378, 'number': 29} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9191919191919192, 'recall': 0.9578947368421052, 'f1': 0.9381443298969072, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9253731343283582, 'recall': 0.9538461538461539, 'f1': 0.9393939393939394, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9263 | 0.9320 | 0.9291 | 0.9528 |
| 0.1374 | 21.0 | 1050 | 0.1901 | {'precision': 0.9771689497716894, 'recall': 0.9727272727272728, 'f1': 0.9749430523917996, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9173228346456693, 'recall': 0.9282868525896414, 'f1': 0.9227722772277227, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9529411764705882, 'recall': 0.9878048780487805, 'f1': 0.9700598802395209, 'number': 246} | {'precision': 0.9411764705882353, 'recall': 0.9411764705882353, 'f1': 0.9411764705882353, 'number': 17} | {'precision': 0.6511627906976745, 'recall': 0.9032258064516129, 'f1': 0.7567567567567568, 'number': 31} | {'precision': 0.8235294117647058, 'recall': 0.7, 'f1': 0.7567567567567567, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.9841269841269841, 'recall': 0.9117647058823529, 'f1': 0.9465648854961831, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 0.75, 'recall': 0.8275862068965517, 'f1': 0.7868852459016394, 'number': 29} | {'precision': 0.5, 'recall': 0.14285714285714285, 'f1': 0.22222222222222224, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9090909090909091, 'recall': 0.9473684210526315, 'f1': 0.9278350515463918, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7, 'recall': 0.7777777777777778, 'f1': 0.7368421052631577, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9253731343283582, 'recall': 0.9538461538461539, 'f1': 0.9393939393939394, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9262 | 0.9312 | 0.9287 | 0.9528 |
| 0.1271 | 22.0 | 1100 | 0.1882 | {'precision': 0.9771689497716894, 'recall': 0.9727272727272728, 'f1': 0.9749430523917996, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9182879377431906, 'recall': 0.9402390438247012, 'f1': 0.9291338582677166, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9565217391304348, 'recall': 0.983739837398374, 'f1': 0.9699398797595191, 'number': 246} | {'precision': 0.9444444444444444, 'recall': 1.0, 'f1': 0.9714285714285714, 'number': 17} | {'precision': 0.675, 'recall': 0.8709677419354839, 'f1': 0.7605633802816901, 'number': 31} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 20} | {'precision': 0.9545454545454546, 'recall': 0.9402985074626866, 'f1': 0.9473684210526316, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 0.7741935483870968, 'recall': 0.8275862068965517, 'f1': 0.7999999999999999, 'number': 29} | {'precision': 0.5, 'recall': 0.14285714285714285, 'f1': 0.22222222222222224, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9285714285714286, 'recall': 0.9578947368421052, 'f1': 0.9430051813471502, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9402985074626866, 'recall': 0.9692307692307692, 'f1': 0.9545454545454547, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9329 | 0.9351 | 0.9340 | 0.9545 |
| 0.1204 | 23.0 | 1150 | 0.1910 | {'precision': 0.9771689497716894, 'recall': 0.9727272727272728, 'f1': 0.9749430523917996, 'number': 220} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9115384615384615, 'recall': 0.9442231075697212, 'f1': 0.9275929549902153, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9494163424124513, 'recall': 0.991869918699187, 'f1': 0.9701789264413518, 'number': 246} | {'precision': 0.9411764705882353, 'recall': 0.9411764705882353, 'f1': 0.9411764705882353, 'number': 17} | {'precision': 0.6842105263157895, 'recall': 0.8387096774193549, 'f1': 0.7536231884057972, 'number': 31} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.9846153846153847, 'recall': 0.9411764705882353, 'f1': 0.962406015037594, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.3333333333333333, 'recall': 0.5, 'f1': 0.4, 'number': 2} | {'precision': 0.7741935483870968, 'recall': 0.8275862068965517, 'f1': 0.7999999999999999, 'number': 29} | {'precision': 0.5, 'recall': 0.14285714285714285, 'f1': 0.22222222222222224, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9292929292929293, 'recall': 0.968421052631579, 'f1': 0.9484536082474226, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9253731343283582, 'recall': 0.9538461538461539, 'f1': 0.9393939393939394, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9309 | 0.9374 | 0.9342 | 0.9575 |
| 0.1142 | 24.0 | 1200 | 0.1861 | {'precision': 0.9772727272727273, 'recall': 0.9772727272727273, 'f1': 0.9772727272727273, 'number': 220} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9111969111969112, 'recall': 0.9402390438247012, 'f1': 0.9254901960784314, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9494163424124513, 'recall': 0.991869918699187, 'f1': 0.9701789264413518, 'number': 246} | {'precision': 1.0, 'recall': 0.9411764705882353, 'f1': 0.9696969696969697, 'number': 17} | {'precision': 0.675, 'recall': 0.8709677419354839, 'f1': 0.7605633802816901, 'number': 31} | {'precision': 0.875, 'recall': 0.7, 'f1': 0.7777777777777777, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 0.7575757575757576, 'recall': 0.8620689655172413, 'f1': 0.8064516129032258, 'number': 29} | {'precision': 0.3333333333333333, 'recall': 0.14285714285714285, 'f1': 0.2, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9285714285714286, 'recall': 0.9578947368421052, 'f1': 0.9430051813471502, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.875, 'recall': 0.7777777777777778, 'f1': 0.823529411764706, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9317 | 0.9382 | 0.9349 | 0.9566 |
| 0.1094 | 25.0 | 1250 | 0.1829 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9182879377431906, 'recall': 0.9402390438247012, 'f1': 0.9291338582677166, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.953307392996109, 'recall': 0.9959349593495935, 'f1': 0.9741550695825051, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.6923076923076923, 'recall': 0.8709677419354839, 'f1': 0.7714285714285716, 'number': 31} | {'precision': 0.9333333333333333, 'recall': 0.7, 'f1': 0.8, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8333333333333334, 'recall': 0.9375, 'f1': 0.8823529411764706, 'number': 16} | {'precision': 0.3333333333333333, 'recall': 0.5, 'f1': 0.4, 'number': 2} | {'precision': 0.78125, 'recall': 0.8620689655172413, 'f1': 0.8196721311475409, 'number': 29} | {'precision': 0.25, 'recall': 0.14285714285714285, 'f1': 0.18181818181818182, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9191919191919192, 'recall': 0.9578947368421052, 'f1': 0.9381443298969072, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7, 'recall': 0.7777777777777778, 'f1': 0.7368421052631577, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9302 | 0.9366 | 0.9334 | 0.9566 |
| 0.0995 | 26.0 | 1300 | 0.1832 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9182879377431906, 'recall': 0.9402390438247012, 'f1': 0.9291338582677166, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9603174603174603, 'recall': 0.983739837398374, 'f1': 0.9718875502008032, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.6923076923076923, 'recall': 0.8709677419354839, 'f1': 0.7714285714285716, 'number': 31} | {'precision': 0.8888888888888888, 'recall': 0.8, 'f1': 0.8421052631578948, 'number': 20} | {'precision': 0.9538461538461539, 'recall': 0.9253731343283582, 'f1': 0.9393939393939394, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 0.8064516129032258, 'recall': 0.8620689655172413, 'f1': 0.8333333333333334, 'number': 29} | {'precision': 0.25, 'recall': 0.14285714285714285, 'f1': 0.18181818181818182, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9285714285714286, 'recall': 0.9578947368421052, 'f1': 0.9430051813471502, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9545454545454546, 'recall': 0.9692307692307692, 'f1': 0.9618320610687022, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9330 | 0.9359 | 0.9344 | 0.9575 |
| 0.096 | 27.0 | 1350 | 0.1860 | {'precision': 0.9771689497716894, 'recall': 0.9727272727272728, 'f1': 0.9749430523917996, 'number': 220} | {'precision': 0.8, 'recall': 0.8, 'f1': 0.8000000000000002, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.918918918918919, 'recall': 0.9482071713147411, 'f1': 0.9333333333333333, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9494163424124513, 'recall': 0.991869918699187, 'f1': 0.9701789264413518, 'number': 246} | {'precision': 0.9411764705882353, 'recall': 0.9411764705882353, 'f1': 0.9411764705882353, 'number': 17} | {'precision': 0.7428571428571429, 'recall': 0.8387096774193549, 'f1': 0.787878787878788, 'number': 31} | {'precision': 0.9375, 'recall': 0.75, 'f1': 0.8333333333333334, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.3333333333333333, 'recall': 0.5, 'f1': 0.4, 'number': 2} | {'precision': 0.8064516129032258, 'recall': 0.8620689655172413, 'f1': 0.8333333333333334, 'number': 29} | {'precision': 0.25, 'recall': 0.14285714285714285, 'f1': 0.18181818181818182, 'number': 7} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.9191919191919192, 'recall': 0.9578947368421052, 'f1': 0.9381443298969072, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9331 | 0.9374 | 0.9352 | 0.9588 |
| 0.0905 | 28.0 | 1400 | 0.1847 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9147286821705426, 'recall': 0.9402390438247012, 'f1': 0.9273084479371317, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9607843137254902, 'recall': 0.9959349593495935, 'f1': 0.9780439121756488, 'number': 246} | {'precision': 0.9444444444444444, 'recall': 1.0, 'f1': 0.9714285714285714, 'number': 17} | {'precision': 0.6842105263157895, 'recall': 0.8387096774193549, 'f1': 0.7536231884057972, 'number': 31} | {'precision': 0.8888888888888888, 'recall': 0.8, 'f1': 0.8421052631578948, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.9375, 'recall': 0.9375, 'f1': 0.9375, 'number': 16} | {'precision': 0.3333333333333333, 'recall': 0.5, 'f1': 0.4, 'number': 2} | {'precision': 0.8709677419354839, 'recall': 0.9310344827586207, 'f1': 0.9, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7, 'recall': 0.7777777777777778, 'f1': 0.7368421052631577, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9378 | 0.9436 | 0.9407 | 0.9600 |
| 0.0866 | 29.0 | 1450 | 0.1840 | {'precision': 0.9771689497716894, 'recall': 0.9727272727272728, 'f1': 0.9749430523917996, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9111969111969112, 'recall': 0.9402390438247012, 'f1': 0.9254901960784314, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.95703125, 'recall': 0.9959349593495935, 'f1': 0.9760956175298804, 'number': 246} | {'precision': 0.9444444444444444, 'recall': 1.0, 'f1': 0.9714285714285714, 'number': 17} | {'precision': 0.7105263157894737, 'recall': 0.8709677419354839, 'f1': 0.782608695652174, 'number': 31} | {'precision': 0.9444444444444444, 'recall': 0.85, 'f1': 0.8947368421052632, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8333333333333334, 'recall': 0.8620689655172413, 'f1': 0.847457627118644, 'number': 29} | {'precision': 0.5, 'recall': 0.2857142857142857, 'f1': 0.36363636363636365, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9292929292929293, 'recall': 0.968421052631579, 'f1': 0.9484536082474226, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9348 | 0.9420 | 0.9384 | 0.9600 |
| 0.083 | 30.0 | 1500 | 0.1803 | {'precision': 0.9814814814814815, 'recall': 0.9636363636363636, 'f1': 0.9724770642201834, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.914396887159533, 'recall': 0.9362549800796812, 'f1': 0.9251968503937008, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.85, 'recall': 1.0, 'f1': 0.9189189189189189, 'number': 17} | {'precision': 0.6666666666666666, 'recall': 0.9032258064516129, 'f1': 0.7671232876712328, 'number': 31} | {'precision': 0.8888888888888888, 'recall': 0.8, 'f1': 0.8421052631578948, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.96875, 'recall': 0.9117647058823529, 'f1': 0.9393939393939394, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8235294117647058, 'recall': 0.875, 'f1': 0.8484848484848485, 'number': 16} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 0.84375, 'recall': 0.9310344827586207, 'f1': 0.8852459016393444, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9333 | 0.9405 | 0.9369 | 0.9579 |
| 0.0779 | 31.0 | 1550 | 0.1871 | {'precision': 0.9771689497716894, 'recall': 0.9727272727272728, 'f1': 0.9749430523917996, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9153846153846154, 'recall': 0.9482071713147411, 'f1': 0.9315068493150686, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9566929133858267, 'recall': 0.9878048780487805, 'f1': 0.9719999999999999, 'number': 246} | {'precision': 0.9411764705882353, 'recall': 0.9411764705882353, 'f1': 0.9411764705882353, 'number': 17} | {'precision': 0.7575757575757576, 'recall': 0.8064516129032258, 'f1': 0.7812499999999999, 'number': 31} | {'precision': 0.9444444444444444, 'recall': 0.85, 'f1': 0.8947368421052632, 'number': 20} | {'precision': 0.9538461538461539, 'recall': 0.9253731343283582, 'f1': 0.9393939393939394, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8709677419354839, 'recall': 0.9310344827586207, 'f1': 0.9, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9484536082474226, 'recall': 0.968421052631579, 'f1': 0.9583333333333333, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9253731343283582, 'recall': 0.9538461538461539, 'f1': 0.9393939393939394, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9384 | 0.9413 | 0.9398 | 0.9596 |
| 0.0753 | 32.0 | 1600 | 0.1819 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9221789883268483, 'recall': 0.9442231075697212, 'f1': 0.9330708661417324, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.95703125, 'recall': 0.9959349593495935, 'f1': 0.9760956175298804, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7297297297297297, 'recall': 0.8709677419354839, 'f1': 0.7941176470588235, 'number': 31} | {'precision': 0.9444444444444444, 'recall': 0.85, 'f1': 0.8947368421052632, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.9375, 'recall': 0.9375, 'f1': 0.9375, 'number': 16} | {'precision': 0.3333333333333333, 'recall': 0.5, 'f1': 0.4, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9408 | 0.9451 | 0.9429 | 0.9630 |
| 0.0714 | 33.0 | 1650 | 0.1810 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.914396887159533, 'recall': 0.9362549800796812, 'f1': 0.9251968503937008, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.6666666666666666, 'recall': 0.9032258064516129, 'f1': 0.7671232876712328, 'number': 31} | {'precision': 0.8947368421052632, 'recall': 0.85, 'f1': 0.8717948717948718, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9356 | 0.9428 | 0.9392 | 0.9588 |
| 0.0694 | 34.0 | 1700 | 0.1802 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9073359073359073, 'recall': 0.9362549800796812, 'f1': 0.9215686274509803, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9566929133858267, 'recall': 0.9878048780487805, 'f1': 0.9719999999999999, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.6585365853658537, 'recall': 0.8709677419354839, 'f1': 0.75, 'number': 31} | {'precision': 0.9444444444444444, 'recall': 0.85, 'f1': 0.8947368421052632, 'number': 20} | {'precision': 0.9538461538461539, 'recall': 0.9253731343283582, 'f1': 0.9393939393939394, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9333 | 0.9405 | 0.9369 | 0.9596 |
| 0.0654 | 35.0 | 1750 | 0.1791 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9294117647058824, 'recall': 0.9442231075697212, 'f1': 0.9367588932806324, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.95703125, 'recall': 0.9959349593495935, 'f1': 0.9760956175298804, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7777777777777778, 'recall': 0.9032258064516129, 'f1': 0.835820895522388, 'number': 31} | {'precision': 0.9444444444444444, 'recall': 0.85, 'f1': 0.8947368421052632, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9430 | 0.9459 | 0.9444 | 0.9626 |
| 0.0684 | 36.0 | 1800 | 0.1751 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9182879377431906, 'recall': 0.9402390438247012, 'f1': 0.9291338582677166, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9566929133858267, 'recall': 0.9878048780487805, 'f1': 0.9719999999999999, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.6829268292682927, 'recall': 0.9032258064516129, 'f1': 0.7777777777777778, 'number': 31} | {'precision': 0.9444444444444444, 'recall': 0.85, 'f1': 0.8947368421052632, 'number': 20} | {'precision': 0.9538461538461539, 'recall': 0.9253731343283582, 'f1': 0.9393939393939394, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9363 | 0.9428 | 0.9395 | 0.9596 |
| 0.0617 | 37.0 | 1850 | 0.1820 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9224806201550387, 'recall': 0.9482071713147411, 'f1': 0.9351669941060905, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9568627450980393, 'recall': 0.991869918699187, 'f1': 0.9740518962075848, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7647058823529411, 'recall': 0.8387096774193549, 'f1': 0.7999999999999999, 'number': 31} | {'precision': 0.9444444444444444, 'recall': 0.85, 'f1': 0.8947368421052632, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9292929292929293, 'recall': 0.968421052631579, 'f1': 0.9484536082474226, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9407 | 0.9436 | 0.9421 | 0.9613 |
| 0.0599 | 38.0 | 1900 | 0.1813 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9186046511627907, 'recall': 0.9442231075697212, 'f1': 0.9312377210216111, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7222222222222222, 'recall': 0.8387096774193549, 'f1': 0.7761194029850746, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9400 | 0.9436 | 0.9418 | 0.9622 |
| 0.0593 | 39.0 | 1950 | 0.1775 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.91796875, 'recall': 0.9362549800796812, 'f1': 0.9270216962524654, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.675, 'recall': 0.8709677419354839, 'f1': 0.7605633802816901, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9363 | 0.9428 | 0.9395 | 0.9600 |
| 0.061 | 40.0 | 2000 | 0.1761 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9108527131782945, 'recall': 0.9362549800796812, 'f1': 0.9233791748526522, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.627906976744186, 'recall': 0.8709677419354839, 'f1': 0.7297297297297297, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899, 'number': 29} | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 7} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9305 | 0.9420 | 0.9363 | 0.9592 |
| 0.0559 | 41.0 | 2050 | 0.1781 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9186046511627907, 'recall': 0.9442231075697212, 'f1': 0.9312377210216111, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9607843137254902, 'recall': 0.9959349593495935, 'f1': 0.9780439121756488, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7297297297297297, 'recall': 0.8709677419354839, 'f1': 0.7941176470588235, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899, 'number': 29} | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9386 | 0.9451 | 0.9419 | 0.9622 |
| 0.053 | 42.0 | 2100 | 0.1779 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9224806201550387, 'recall': 0.9482071713147411, 'f1': 0.9351669941060905, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7222222222222222, 'recall': 0.8387096774193549, 'f1': 0.7761194029850746, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899, 'number': 29} | {'precision': 0.8, 'recall': 0.5714285714285714, 'f1': 0.6666666666666666, 'number': 7} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9386 | 0.9444 | 0.9414 | 0.9609 |
| 0.0537 | 43.0 | 2150 | 0.1791 | {'precision': 0.9815668202764977, 'recall': 0.9681818181818181, 'f1': 0.9748283752860412, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9186046511627907, 'recall': 0.9442231075697212, 'f1': 0.9312377210216111, 'number': 251} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7222222222222222, 'recall': 0.8387096774193549, 'f1': 0.7761194029850746, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8387096774193549, 'recall': 0.896551724137931, 'f1': 0.8666666666666666, 'number': 29} | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 7} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9370 | 0.9428 | 0.9399 | 0.9609 |
| 0.0528 | 44.0 | 2200 | 0.1768 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.921875, 'recall': 0.9402390438247012, 'f1': 0.930966469428008, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9607843137254902, 'recall': 0.9959349593495935, 'f1': 0.9780439121756488, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7, 'recall': 0.9032258064516129, 'f1': 0.7887323943661972, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9846153846153847, 'recall': 0.9552238805970149, 'f1': 0.9696969696969696, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899, 'number': 29} | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 7} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9372 | 0.9459 | 0.9415 | 0.9609 |
| 0.0551 | 45.0 | 2250 | 0.1768 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.915057915057915, 'recall': 0.9442231075697212, 'f1': 0.9294117647058824, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.6923076923076923, 'recall': 0.8709677419354839, 'f1': 0.7714285714285716, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.9, 'recall': 0.9310344827586207, 'f1': 0.9152542372881356, 'number': 29} | {'precision': 1.0, 'recall': 0.5714285714285714, 'f1': 0.7272727272727273, 'number': 7} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9372 | 0.9451 | 0.9411 | 0.9609 |
| 0.0498 | 46.0 | 2300 | 0.1794 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.918918918918919, 'recall': 0.9482071713147411, 'f1': 0.9333333333333333, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9604743083003953, 'recall': 0.9878048780487805, 'f1': 0.9739478957915831, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7027027027027027, 'recall': 0.8387096774193549, 'f1': 0.7647058823529411, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9538461538461539, 'recall': 0.9253731343283582, 'f1': 0.9393939393939394, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8709677419354839, 'recall': 0.9310344827586207, 'f1': 0.9, 'number': 29} | {'precision': 0.8, 'recall': 0.5714285714285714, 'f1': 0.6666666666666666, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9371 | 0.9436 | 0.9403 | 0.9609 |
| 0.0499 | 47.0 | 2350 | 0.1771 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9224806201550387, 'recall': 0.9482071713147411, 'f1': 0.9351669941060905, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7297297297297297, 'recall': 0.8709677419354839, 'f1': 0.7941176470588235, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899, 'number': 29} | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9538461538461539, 'recall': 0.9538461538461539, 'f1': 0.9538461538461539, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9386 | 0.9451 | 0.9419 | 0.9609 |
| 0.0524 | 48.0 | 2400 | 0.1752 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9182879377431906, 'recall': 0.9402390438247012, 'f1': 0.9291338582677166, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.675, 'recall': 0.8709677419354839, 'f1': 0.7605633802816901, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899, 'number': 29} | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7, 'recall': 0.7777777777777778, 'f1': 0.7368421052631577, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9342 | 0.9436 | 0.9389 | 0.9609 |
| 0.0502 | 49.0 | 2450 | 0.1770 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.9224806201550387, 'recall': 0.9482071713147411, 'f1': 0.9351669941060905, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7297297297297297, 'recall': 0.8709677419354839, 'f1': 0.7941176470588235, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8709677419354839, 'recall': 0.9310344827586207, 'f1': 0.9, 'number': 29} | {'precision': 0.8, 'recall': 0.5714285714285714, 'f1': 0.6666666666666666, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9387 | 0.9459 | 0.9423 | 0.9622 |
| 0.05 | 50.0 | 2500 | 0.1775 | {'precision': 0.981651376146789, 'recall': 0.9727272727272728, 'f1': 0.9771689497716896, 'number': 220} | {'precision': 0.7, 'recall': 0.7, 'f1': 0.7, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.918918918918919, 'recall': 0.9482071713147411, 'f1': 0.9333333333333333, 'number': 251} | {'precision': 0.8888888888888888, 'recall': 0.7272727272727273, 'f1': 0.7999999999999999, 'number': 11} | {'precision': 0.9606299212598425, 'recall': 0.991869918699187, 'f1': 0.976, 'number': 246} | {'precision': 0.8947368421052632, 'recall': 1.0, 'f1': 0.9444444444444444, 'number': 17} | {'precision': 0.7027027027027027, 'recall': 0.8387096774193549, 'f1': 0.7647058823529411, 'number': 31} | {'precision': 0.9473684210526315, 'recall': 0.9, 'f1': 0.9230769230769231, 'number': 20} | {'precision': 0.9692307692307692, 'recall': 0.9402985074626866, 'f1': 0.9545454545454547, 'number': 67} | {'precision': 0.984375, 'recall': 0.9264705882352942, 'f1': 0.9545454545454545, 'number': 68} | {'precision': 0.9649122807017544, 'recall': 0.9821428571428571, 'f1': 0.9734513274336283, 'number': 56} | {'precision': 0.8823529411764706, 'recall': 0.9375, 'f1': 0.9090909090909091, 'number': 16} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8666666666666667, 'recall': 0.896551724137931, 'f1': 0.8813559322033899, 'number': 29} | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 7} | {'precision': 0.3333333333333333, 'recall': 0.3333333333333333, 'f1': 0.3333333333333333, 'number': 3} | {'precision': 0.9387755102040817, 'recall': 0.968421052631579, 'f1': 0.9533678756476685, 'number': 95} | {'precision': 0.8571428571428571, 'recall': 0.8571428571428571, 'f1': 0.8571428571428571, 'number': 7} | {'precision': 0.7777777777777778, 'recall': 0.7777777777777778, 'f1': 0.7777777777777778, 'number': 9} | {'precision': 0.9230769230769231, 'recall': 1.0, 'f1': 0.9600000000000001, 'number': 12} | {'precision': 0.9393939393939394, 'recall': 0.9538461538461539, 'f1': 0.9465648854961831, 'number': 65} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 43} | 0.9364 | 0.9444 | 0.9404 | 0.9609 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
cs6220-ai-gradescope-grader/cs2200-llama-3.2-1B-instruct-no-custom-trainer | cs6220-ai-gradescope-grader | "2024-12-01T10:43:17Z" | 142 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-01T10:42:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
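Since usage is not documented here, the following is a minimal, hedged sketch that assumes the standard `transformers` causal-LM interface and the repository id from this listing; the chat message and generation settings are illustrative assumptions, not the authors' documented usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cs6220-ai-gradescope-grader/cs2200-llama-3.2-1B-instruct-no-custom-trainer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt; the intended grading/instruction format is not documented in this card.
messages = [{"role": "user", "content": "Explain what a cache miss is in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```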
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
damgomz/ft_16_4e6_x1 | damgomz | "2024-07-13T09:12:20Z" | 18 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-21T16:02:50Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 84244.64056110382 |
| Emissions (Co2eq in kg) | 0.0509776052431713 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9945509062005448 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0877531901704767 |
| Consumed energy (kWh) | 1.082304096371019 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1621709330801248 |
| Emissions (Co2eq in kg) | 0.032995817553098994 |
## Note
12 July 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/fp_bs32_lr1e4_x1 |
| model_name | ft_16_4e6_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 4e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.699963 | 0.316141 |
| 1 | 0.336513 | 0.211741 | 0.918781 |
| 2 | 0.177215 | 0.190896 | 0.929324 |
| 3 | 0.135976 | 0.192683 | 0.924569 |
| 4 | 0.098041 | 0.233538 | 0.935565 |
| 5 | 0.063898 | 0.250738 | 0.909054 |
| 6 | 0.037655 | 0.277108 | 0.929590 |
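The card does not include an inference snippet; below is a minimal sketch assuming the standard `transformers` text-classification pipeline, using the widget text from the metadata above as input (the label names returned depend on the checkpoint configuration).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="damgomz/ft_16_4e6_x1")
text = (
    "GEPS Techno is the pioneer of hybridization of renewable energies at sea. "
    "We imagine, design and commercialize innovative off-grid systems that aim "
    "to generate power at sea, stabilize and collect data."
)
print(classifier(text))  # e.g. [{'label': ..., 'score': ...}]
```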
|
philip-hightech/af5e3c4c-6f7b-42fc-bc1c-81b13e634b12 | philip-hightech | "2025-01-26T07:03:19Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | "2025-01-26T07:01:25Z" | ---
library_name: peft
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af5e3c4c-6f7b-42fc-bc1c-81b13e634b12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 583c436b7dfe63c6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/583c436b7dfe63c6_train_data.json
type:
field_input: description
field_instruction: name
field_output: symptoms
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/af5e3c4c-6f7b-42fc-bc1c-81b13e634b12
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/583c436b7dfe63c6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 26cb4ebb-3df7-4431-a684-59b1c72c5755
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 26cb4ebb-3df7-4431-a684-59b1c72c5755
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# af5e3c4c-6f7b-42fc-bc1c-81b13e634b12
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1151
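Because this repository is a LoRA adapter rather than a full model, inference requires attaching it to the base checkpoint. A hedged sketch with `peft` follows; the prompt mirrors the `'{instruction} {input}'` format from the axolotl config above, but the specific disease name and description are made up for illustration.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "philip-hightech/af5e3c4c-6f7b-42fc-bc1c-81b13e634b12")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# '{instruction} {input}' = disease name followed by its description (hypothetical example).
prompt = "Migraine A neurological condition characterized by recurrent headaches."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```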
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4699 | 0.0083 | 1 | 5.2473 |
| 2.1896 | 0.0248 | 3 | 5.2549 |
| 2.2403 | 0.0496 | 6 | 5.2332 |
| 2.0356 | 0.0744 | 9 | 5.1151 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Envoid/Dendrite-session3-grimpep-remerge-22B-ggml | Envoid | "2023-07-29T07:30:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-07-29T06:53:57Z" | # Warning: This model is unpredictable and may output anything on a dime.
I trained Dendrite on an additional 4.4 megabytes of raw corpus and then re-merged it with grimpep's latest 22B super merge in order to correct its disorder.
It's a very 'cerebral' model that is pretty good at story writing. It now has difficulty responding to instruct prompts, so it might give better results with completion prompts.
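To make the distinction concrete, here are hypothetical examples (not from the author) of the two prompting styles; per the note above, seeding the model with a completion-style opening tends to work better than issuing instructions.

```python
# Hypothetical prompt strings for illustration only; no specific prompt format is documented.
instruct_prompt = (
    "### Instruction:\nWrite a short story about a lighthouse keeper.\n\n### Response:\n"
)
completion_prompt = "The lighthouse keeper climbed the spiral stairs for the last time, "
```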
Will probably upload the FP-16 version at a later date. |
Mbian/xxx_TinyLLaMA_medical_sft | Mbian | "2025-03-09T11:05:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-09T11:05:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
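Usage is not documented in this card; the following is a minimal, hedged sketch with the `transformers` text-generation pipeline and the repository id from this listing. The question/answer prompt format is an assumption, and the model's medical output should not be treated as clinical advice.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Mbian/xxx_TinyLLaMA_medical_sft",  # repository id from this listing
)
# Hypothetical prompt; the SFT prompt format is not documented.
prompt = "Question: What are common symptoms of anemia?\nAnswer:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```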
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dadashzadeh/bert-base-news-or-informational-nft-english | dadashzadeh | "2024-07-25T00:03:52Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:dadashzadeh/news-or-informational-nft",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-12-14T20:58:29Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-Classification-news-or-informational-nft
results: []
datasets:
- dadashzadeh/news-or-informational-nft
language:
- en
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-Classification-news-or-informational-nft
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the dadashzadeh/news-or-informational-nft dataset listed in the metadata above.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Accuracy: 0.9998
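A minimal inference sketch, assuming the standard `transformers` sequence-classification interface; the example sentence is hypothetical, and the label names come from the checkpoint's own `id2label` mapping rather than from this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "dadashzadeh/bert-base-news-or-informational-nft-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The collection's floor price doubled after yesterday's marketplace announcement."  # hypothetical input
with torch.no_grad():
    logits = model(**tokenizer(text, return_tensors="pt")).logits
pred = logits.argmax(-1).item()
print(model.config.id2label[pred], logits.softmax(-1)[0, pred].item())
```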
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.043 | 1.0 | 1175 | 0.0082 | 0.9984 |
| 0.0151 | 2.0 | 2350 | 0.0038 | 0.9994 |
| 0.0053 | 3.0 | 3525 | 0.0009 | 0.9998 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
tenich/Reinforce-PixelCopter | tenich | "2023-04-02T23:52:29Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-02T17:08:07Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.10 +/- 29.77
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
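For orientation, the policy used in that unit is a small feed-forward network trained with the REINFORCE (policy-gradient) objective. The sketch below shows the general shape; the hidden size and the state/action dimensions are placeholders, not necessarily what this checkpoint uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self, s_size, a_size, h_size=64):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)  # action probabilities

    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        dist = Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)  # log-prob feeds the REINFORCE loss
```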
|
isspek/xlnet-base-cased_covid_top3_3_2e-5_16_undersampling_0.5 | isspek | "2024-12-26T14:14:20Z" | 120 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-26T14:14:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HPLT/hplt_bert_base_is | HPLT | "2024-11-24T19:13:17Z" | 137 | 0 | transformers | [
"transformers",
"pytorch",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"is",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | "2024-04-22T01:22:54Z" | ---
language:
- is
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Icelandic
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn).
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_is")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_is", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
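For example, the sequence-classification wrapper can presumably be loaded the same way as the masked-LM example above (a sketch; the two-label setup and the Icelandic example sentence are illustrative, and the classification head is freshly initialized rather than trained):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_is")
model = AutoModelForSequenceClassification.from_pretrained(
    "HPLT/hplt_bert_base_is", num_labels=2, trust_remote_code=True
)
inputs = tokenizer("Þetta er dæmi.", return_tensors="pt")  # "This is an example."
print(model(**inputs).logits.shape)  # torch.Size([1, 2]): untrained head on top of the pretrained encoder
```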
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, saved every 3125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_is", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_is")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e1_s55555_v4_l5_v50 | KingKazma | "2023-08-13T20:45:13Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-13T20:12:11Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
ben-wycliff/sexed-llama2-7b-sft-lora-v1 | ben-wycliff | "2024-05-24T18:37:28Z" | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | "2024-05-22T20:02:08Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
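The card does not yet include a snippet, so the following is only a minimal sketch based on the metadata above: this repository holds a PEFT (LoRA) adapter for `meta-llama/Llama-2-7b-chat-hf`, a gated base model you must have access to.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "ben-wycliff/sexed-llama2-7b-sft-lora-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter on top of the base chat model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, can you introduce yourself?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```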
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
dkoh12/distilbert-base-uncased-finetuned_emotion | dkoh12 | "2023-04-01T02:55:52Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-01T02:48:58Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned_emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230506440647792
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned_emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2168
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
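No usage snippet is provided; the sketch below assumes the standard `pipeline` API for this DistilBERT classifier fine-tuned on the `emotion` dataset (depending on how the config was saved, labels may show up as `LABEL_0`–`LABEL_5` rather than emotion names).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dkoh12/distilbert-base-uncased-finetuned_emotion",
)
# The emotion dataset uses six classes: sadness, joy, love, anger, fear, surprise.
print(classifier("I am thrilled with how this turned out!"))
```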
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8702 | 1.0 | 250 | 0.3219 | 0.9055 | 0.9026 |
| 0.2588 | 2.0 | 500 | 0.2168 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
vegaandre/FineTunedModel8_Menu | vegaandre | "2024-06-20T04:40:58Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-20T04:39:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
michaelw37/sc41 | michaelw37 | "2024-04-19T17:00:02Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-19T16:58:32Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chotikap/t5-end2end-questions-generation | chotikap | "2023-04-26T13:42:51Z" | 161 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-26T12:04:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5681
## Model description
More information needed
## Intended uses & limitations
More information needed
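No example is given in the card; below is a minimal generation sketch. The exact input format depends on the preprocessing used for `squad_modified_for_t5_qg` — the `generate questions:` prefix used here is a common convention for end-to-end question generation and is an assumption, not taken from this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "chotikap/t5-end2end-questions-generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = (
    "generate questions: The Eiffel Tower was completed in 1889 "
    "and is located in Paris, France."
)
inputs = tokenizer(context, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```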
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5733 | 0.34 | 100 | 1.9072 |
| 1.9659 | 0.68 | 200 | 1.7279 |
| 1.8436 | 1.02 | 300 | 1.6666 |
| 1.7433 | 1.35 | 400 | 1.6389 |
| 1.7143 | 1.69 | 500 | 1.6149 |
| 1.6904 | 2.03 | 600 | 1.6086 |
| 1.6305 | 2.37 | 700 | 1.5930 |
| 1.6268 | 2.71 | 800 | 1.5896 |
| 1.6151 | 3.05 | 900 | 1.5926 |
| 1.5712 | 3.39 | 1000 | 1.5857 |
| 1.5671 | 3.73 | 1100 | 1.5736 |
| 1.5518 | 4.06 | 1200 | 1.5784 |
| 1.5372 | 4.4 | 1300 | 1.5825 |
| 1.5244 | 4.74 | 1400 | 1.5702 |
| 1.5178 | 5.08 | 1500 | 1.5708 |
| 1.4954 | 5.42 | 1600 | 1.5712 |
| 1.4866 | 5.76 | 1700 | 1.5692 |
| 1.5027 | 6.1 | 1800 | 1.5685 |
| 1.4778 | 6.44 | 1900 | 1.5712 |
| 1.477 | 6.77 | 2000 | 1.5681 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
mehdie/Extended-mBERT | mehdie | "2024-05-12T16:23:13Z" | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-12T00:02:31Z" | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-uncased
tags:
- generated_from_trainer
model-index:
- name: MEHDIE_mBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MEHDIE_mBERT
This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0261
- Perplexity: 2.79
## Model description
More information needed
## Intended uses & limitations
More information needed
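The card does not show how to query the model; since it is a masked-language model, a minimal fill-mask sketch (the sentence is only a placeholder):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="mehdie/Extended-mBERT")
print(fill("Paris is the [MASK] of France."))
```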
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.5096 | 1.0 | 51630 | 1.2459 |
| 1.2498 | 2.0 | 103260 | 1.1339 |
| 1.1693 | 3.0 | 154890 | 1.0784 |
| 1.1233 | 4.0 | 206520 | 1.0425 |
| 1.0951 | 5.0 | 258150 | 1.0263 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.3.0a0+ebedce2
- Datasets 2.17.1
- Tokenizers 0.15.2
|
nicklashansen/tdmpc2 | nicklashansen | "2023-10-26T01:03:00Z" | 0 | 13 | null | [
"reinforcement learning",
"world model",
"continuous control",
"robotics",
"reinforcement-learning",
"arxiv:2310.16828",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | reinforcement-learning | "2023-10-23T18:46:55Z" | ---
license: mit
tags:
- reinforcement learning
- world model
- continuous control
- robotics
pipeline_tag: reinforcement-learning
---
# Model Card for TD-MPC2
Official release of TD-MPC2 model checkpoints for the paper
[Scalable, Robust World Models for Continuous Control](https://nicklashansen.github.io/td-mpc2) by
[Nicklas Hansen](https://nicklashansen.github.io), [Hao Su](https://cseweb.ucsd.edu/~haosu)\*, [Xiaolong Wang](https://xiaolonw.github.io)\* (UC San Diego)
**Quick links:** [[Website]](https://nicklashansen.github.io/td-mpc2) [[Paper]](https://arxiv.org/abs/2310.16828) [[Dataset]](https://huggingface.co/datasets/nicklashansen/tdmpc2)
## Model Details
We open-source a total of 324 TD-MPC2 model checkpoints, including 12 multi-task models (ranging from 1M to 317M parameters) trained on 80, 70, and 30 tasks, respectively. We are excited to see what the community will do with these models, and hope that our release will encourage other research labs to open-source their checkpoints as well. This section aims to provide further details about the released models.
### Model Description
- **Developed by:** [Nicklas Hansen](https://nicklashansen.github.io) (UC San Diego)
- **Model type:** TD-MPC2 models trained on tasks from DMControl, Meta-World, Maniskill2, and MyoSuite.
- **License:** MIT
### Model Sources
- **Repository:** [https://github.com/nicklashansen/tdmpc2](https://github.com/nicklashansen/tdmpc2)
- **Paper:** [https://arxiv.org/abs/2310.16828](https://arxiv.org/abs/2310.16828)
## Uses
As this is one of the first major releases of model checkpoints for reinforcement learning, use of our TD-MPC2 checkpoints is fairly open-ended. We envision that our checkpoints will be useful for researchers interested in training, finetuning, evaluating, and analyzing models on any of the 104 continuous control tasks that we release models for. However, we also expect the community to discover new use cases for these checkpoints.
### Direct Use
Model checkpoints can be loaded using the [official implementation](https://github.com/nicklashansen/tdmpc2), and then be used to reproduce our results and/or generate trajectories for any of the supported tasks.
### Out-of-Scope Use
We do not expect our model checkpoints to generalize to new (unseen) tasks as is. Such model usage will most likely require some amount of fine-tuning on target task data.
## How to Get Started with the Models
Refer to the [official implementation](https://github.com/nicklashansen/tdmpc2) for installation instructions and example usage.
## Training Details
We describe the training procedure for single-task and multi-task model checkpoints in the following.
### Training Procedure (Single-task)
Single-task checkpoints are trained using the [official implementation](https://github.com/nicklashansen/tdmpc2) with default hyperparameters. All models have 5M parameters. Most, but not all, models are trained until convergence. Refer to the individual task curves in our [paper](https://arxiv.org/abs/2310.16828) for a detailed breakdown of model performance on each task.
### Training Procedure (Multi-task)
Multi-task checkpoints are trained using the [official implementation](https://github.com/nicklashansen/tdmpc2) with `batch_size=1024` and otherwise default hyperparameters. We release checkpoints trained on the 80-task and 30-task datasets provided [here](https://huggingface.co/datasets/nicklashansen/tdmpc2), as well as a 70-task dataset that is obtained by filtering the 80-task dataset based on task IDs. We release model checkpoints ranging from 1M to 317M parameters.
## Environmental Impact
Carbon emissions are estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** NVIDIA GeForce RTX 3090
- **Hours used:** Approx. 50,000
- **Provider:** Private infrastructure
- **Carbon Emitted:** Approx. 7560 kg CO2eq
## Citation
If you find our work useful, please consider citing the paper as follows:
**BibTeX:**
```
@misc{hansen2023tdmpc2,
title={TD-MPC2: Scalable, Robust World Models for Continuous Control},
author={Nicklas Hansen and Hao Su and Xiaolong Wang},
year={2023},
eprint={2310.16828},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
## Contact
Correspondence to: [Nicklas Hansen](https://nicklashansen.github.io) |
jonatasgrosman/exp_w2v2t_fr_vp-100k_s509 | jonatasgrosman | "2022-07-08T23:17:07Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-08T23:16:21Z" | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_fr_vp-100k_s509
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
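The card itself does not include code; the sketch below uses plain `transformers` for transcription (the audio path is a placeholder, and the HuggingSound tool mentioned above can be used instead as a higher-level interface).

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "jonatasgrosman/exp_w2v2t_fr_vp-100k_s509"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load the audio resampled to the 16 kHz rate the model expects.
speech, _ = librosa.load("example.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```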
|
TanmaySah/medium | TanmaySah | "2023-09-29T23:05:37Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-29T16:55:59Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
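For reference, the 8-bit settings listed above correspond to the following `BitsAndBytesConfig`; this is only a sketch, and the base model id is a placeholder since the card does not name it.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings above (8-bit loading; the fp4 fields apply only to 4-bit mode).
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: the base model is not stated in this card
    quantization_config=bnb_config,
)
```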
### Framework versions
- PEFT 0.5.0
- PEFT 0.5.0
- PEFT 0.5.0
|
slimaneMakh/BinarySuperClass_Lease_tableClassification_27may_distilBert_BASELINE | slimaneMakh | "2024-05-27T12:58:31Z" | 181 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-27T12:58:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jjundol/results | jjundol | "2025-02-28T01:57:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-28T01:57:03Z" | ---
library_name: transformers
base_model: klue/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5099
- Accuracy: 0.843
## Model description
More information needed
## Intended uses & limitations
More information needed
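The task and label set are not documented; the sketch below assumes a standard sequence-classification head on top of `klue/roberta-base`, with a placeholder Korean sentence (labels will appear as generic ids unless they were saved in the config).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jjundol/results"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("이 영화 정말 재미있었어요!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```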
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5354 | 1.0 | 1250 | 0.5410 | 0.852 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
John6666/wai-real-e-v2-sdxl | John6666 | "2024-08-31T21:55:42Z" | 219 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"western-style",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-31T21:48:24Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- western-style
- pony
---
Original model is [here](https://civitai.com/models/582519/wai-reale?modelVersionId=790287).
This model was created by [WAI0731](https://civitai.com/user/WAI0731).
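No usage instructions are given; a minimal text-to-image sketch with 🧨 diffusers follows (the prompt and sampler settings are placeholders, not recommendations from the model author).

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/wai-real-e-v2-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photorealistic portrait of a woman in natural light",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("output.png")
```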
|
mradermacher/Berry_v2_7B-i1-GGUF | mradermacher | "2024-08-02T09:58:26Z" | 11 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:jeiku/Berry_v2_7B",
"base_model:quantized:jeiku/Berry_v2_7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-07-02T22:38:34Z" | ---
base_model: jeiku/Berry_v2_7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jeiku/Berry_v2_7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Berry_v2_7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
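As a concrete starting point, a single quant file can be fetched and run with `llama-cpp-python`; this is only a sketch, and the quant choice and context size are placeholders.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant, e.g. the "optimal size/speed/quality" Q4_K_S file from the table below.
path = hf_hub_download(
    "mradermacher/Berry_v2_7B-i1-GGUF", "Berry_v2_7B.i1-Q4_K_S.gguf"
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Once upon a time", max_tokens=64)["choices"][0]["text"])
```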
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Berry_v2_7B-i1-GGUF/resolve/main/Berry_v2_7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MaziyarPanahi/mergekit-slerp-jovftfd-GGUF | MaziyarPanahi | "2024-06-15T14:58:47Z" | 30 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Equall/Saul-Base",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-jovftfd",
"base_model:quantized:mergekit-community/mergekit-slerp-jovftfd"
] | text-generation | "2024-06-15T14:37:46Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- conversational
- base_model:Equall/Saul-Base
- base_model:HuggingFaceH4/zephyr-7b-beta
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-jovftfd-GGUF
base_model: mergekit-community/mergekit-slerp-jovftfd
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-jovftfd-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jovftfd-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-jovftfd](https://huggingface.co/mergekit-community/mergekit-slerp-jovftfd)
## Description
[MaziyarPanahi/mergekit-slerp-jovftfd-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jovftfd-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-jovftfd](https://huggingface.co/mergekit-community/mergekit-slerp-jovftfd).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
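For example, a downloaded quant from this repository can be run with the `llama-cpp-python` client listed above; the file name below is a placeholder for whichever quant you choose.

```python
from llama_cpp import Llama

llm = Llama(model_path="mergekit-slerp-jovftfd.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```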
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
IvanBrl/asd | IvanBrl | "2024-03-31T18:57:05Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-03-31T18:56:34Z" | #!/usr/bin/env python
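# Script overview: build a Sobel edge map, detect DoG blobs on it, then match ORB
# descriptors between keypoints inside each blob and the rest of the image
# (a typical copy-move duplicate-region detection pipeline), drawing the matched
# pairs and saving an annotated result image for every input .jpg.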
from datetime import datetime
from skimage.feature import blob_dog,match_descriptors
from math import sqrt
import cv2
import numpy as np
import scipy
from scipy import ndimage
from scipy.spatial import distance
import glob, os
import math
# Initiate orb detector
orb = cv2.ORB_create(1000)
# create BFMatcher
matcher = cv2.DescriptorMatcher_create(cv2.DescriptorMatcher_BRUTEFORCE_HAMMING)
def sobel_f(im1):
image =im1.astype (int)
# derivatives
dx=ndimage.sobel(image, 1)
dy=ndimage.sobel(image, 0)
mag=np.hypot(dx, dy)
# normalization
mag*= 255.0 / np.max(mag)
sobel_im1 = np.uint8(mag)
return sobel_im1
def dog_f(im1_gray):
blobs_dog = blob_dog(im1_gray, max_sigma=40, threshold=.1)
blobs_dog[:, 2] = blobs_dog[:, 2] * sqrt(2)
return blobs_dog
def show_f(blobs_all):
blob_area =[]
blobs_list = [blobs_all]
for blobs in blobs_list:
for blob in blobs:
y, x, r = blob
area = [y,x,r]
if 2*r > 1:
#print area
blob_area.append(area)
return blob_area
if __name__=='__main__':
i = 0
images = [image for image in sorted(glob.glob('*.jpg'))]
for im in images:
print(im)
start_time = datetime.now()
im1 = cv2.imread (im)
sobel_image = sobel_f(im1)
sobel_gray =cv2.cvtColor(sobel_image, cv2.COLOR_BGR2GRAY)
im2_gray =cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY)
blobs_all = dog_f(sobel_gray)
output = show_f(blobs_all)
clone1 = im1.copy()
key,des = orb.detectAndCompute(im2_gray, None)
#print('keypoints :',len(key),'...',len(des))
src = np.array([]).reshape(-1,1,2)
dst = np.array([]).reshape(-1,1,2)
geom = 0
ll =[]
for b0 in range(0,len(output)):
b0y,b0x,b0r = output[b0]
cv2.circle(clone1, (int(b0x),int(b0y)), int(b0r), (0, 0, 250), 1)
l =[]
kp_1 =[]
ds_1 =[]
l3 =[]
index= 0
for k,d in zip(key,des):
if (k.pt[0] - b0x)**2 + (k.pt[1] - b0y)**2 <= (b0r **2):
l.append(index)
#print('l :',len(l))
kp_1.append(k)
ds_1.append(d)
index+=1
if l:
kp_2= np.delete(key,l,axis=0)
ds_2 = np.delete(des,l,axis=0)
#print('k :',len(kp),'...',len(ds))
#nn_matches = bf.match(np.array(ds_1),ds_2)
nn_matches = matcher.knnMatch(np.array(ds_1), ds_2, 2)
#print(nn_matches)
good = []
#matched1 = []
#matched2 = []
nn_match_ratio = 0.6 # Nearest neighbor matching ratio
for m, n in nn_matches:
#print(m)
#Use 2-nn matches and ratio criterion to find correct keypoint matches
#If the closest match distance is significantly lower than the second closest one, then the match is correct (match is not ambiguous).
if m.distance < nn_match_ratio * n.distance:
#print(x1,y1,x2,y2)
good.append(m)
MIN_MATCH_COUNT = 3
if len(good) > MIN_MATCH_COUNT:
src_pts = np.float32([kp_1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp_2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
#src = np.concatenate((src,src_pts))
#dst = np.concatenate((dst,dst_pts))
src = np.array(src_pts).ravel()
dst = np.array(dst_pts).ravel()
ps =np.array(src).reshape((-1,2))
pd =np.array(dst).reshape((-1,2))
for k1,k2 in zip(ps,pd):
cv2.circle(clone1, (int(k1[0]),int(k1[1])),4,(0,0,255),-1)
cv2.circle(clone1, (int(k2[0]),int(k2[1])),4,(0,255,255),-1)
cv2.line(clone1,(int(k1[0]),int(k1[1])),(int(k2[0]),int(k2[1])),(0,255,0),2)
#cv2.imshow('image',clone1)
cv2.imwrite('detectionz-results__'+str(i)+'.png',clone1)
end_time = datetime.now()
print('Duration: {}'.format(end_time - start_time))
i += 1
cv2.waitKey(0)
cv2.destroyAllWindows() |
RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf | RichardErkhov | "2024-08-27T01:15:57Z" | 12 | 0 | null | [
"gguf",
"arxiv:2012.15613",
"arxiv:2001.09977",
"endpoints_compatible",
"region:us"
] | null | "2024-08-26T23:42:21Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloomz-3b-sft-chat - GGUF
- Model creator: https://huggingface.co/cmarkea/
- Original model: https://huggingface.co/cmarkea/bloomz-3b-sft-chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bloomz-3b-sft-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q2_K.gguf) | Q2_K | 1.52GB |
| [bloomz-3b-sft-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.IQ3_XS.gguf) | IQ3_XS | 1.68GB |
| [bloomz-3b-sft-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.IQ3_S.gguf) | IQ3_S | 1.71GB |
| [bloomz-3b-sft-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q3_K_S.gguf) | Q3_K_S | 1.71GB |
| [bloomz-3b-sft-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.IQ3_M.gguf) | IQ3_M | 1.81GB |
| [bloomz-3b-sft-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q3_K.gguf) | Q3_K | 1.9GB |
| [bloomz-3b-sft-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q3_K_M.gguf) | Q3_K_M | 1.9GB |
| [bloomz-3b-sft-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q3_K_L.gguf) | Q3_K_L | 2.02GB |
| [bloomz-3b-sft-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.IQ4_XS.gguf) | IQ4_XS | 2.0GB |
| [bloomz-3b-sft-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q4_0.gguf) | Q4_0 | 2.08GB |
| [bloomz-3b-sft-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.IQ4_NL.gguf) | IQ4_NL | 2.09GB |
| [bloomz-3b-sft-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q4_K_S.gguf) | Q4_K_S | 2.09GB |
| [bloomz-3b-sft-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q4_K.gguf) | Q4_K | 2.24GB |
| [bloomz-3b-sft-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q4_K_M.gguf) | Q4_K_M | 2.24GB |
| [bloomz-3b-sft-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q4_1.gguf) | Q4_1 | 2.25GB |
| [bloomz-3b-sft-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q5_0.gguf) | Q5_0 | 2.43GB |
| [bloomz-3b-sft-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q5_K_S.gguf) | Q5_K_S | 2.43GB |
| [bloomz-3b-sft-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q5_K.gguf) | Q5_K | 2.55GB |
| [bloomz-3b-sft-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q5_K_M.gguf) | Q5_K_M | 2.55GB |
| [bloomz-3b-sft-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q5_1.gguf) | Q5_1 | 2.6GB |
| [bloomz-3b-sft-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q6_K.gguf) | Q6_K | 2.8GB |
| [bloomz-3b-sft-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/cmarkea_-_bloomz-3b-sft-chat-gguf/blob/main/bloomz-3b-sft-chat.Q8_0.gguf) | Q8_0 | 3.62GB |
Original model description:
---
license: bigscience-bloom-rail-1.0
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
- shahules786/orca-chat
- timdettmers/openassistant-guanaco
- laion/OIG
language:
- fr
- en
library_name: transformers
pipeline_tag: text-generation
inference:
parameters:
max_new_tokens: 128
widget:
- text: </s>Bonjour, qui es-tu ?<s>
- text: </s>Hello, who are you?<s>
---
bloomz-3b-sft-chat
--------------------
We introduce the bloomz-3b-sft-chat model, a fine-tuned version of the Large Language Model (LLM) [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b). This model is notable for being pre-trained for a chatbot context and for its conversion from float16 to bfloat16. It therefore serves as a solid starting point for fine-tuning towards other, more specific tasks.
The model was trained equally on both French and English data, ensuring maximum efficiency for these two languages (and their interactions). Due to the transition from float16 to bfloat16, we do not guarantee the preservation of the original model's multilingual capabilities. However, fine-tuning can restore reasonable performance on other languages.
The objective is to pre-train all three models (Bloomz-{560m, 3b, 7b1-mt}-sft-chat) to ensure high-performing, energy-efficient, and fast "foundation" models for inference on "realistic" infrastructures suitable for a business with standard industrial capabilities.
Bloomz, through its license, enables free and flexible industrial use. Its tokenizer has been designed with true multi-lingual context in mind, with a significantly lower token generation per word compared to other LLM models. This capability not only leads to improved performance but also enhanced efficiency during inference by making fewer model calls when generating text with shorter contexts. Here is a table illustrating our points using French as an example, where we tokenized Marcel Proust's longest sentence (823 words):
```
Sans honneur que précaire, sans liberté que provisoire, [...], et de façon qu’à eux-mêmes il ne leur paraisse pas un vice.
```
| model | GPT 3.5 | Boris | Flan-T5 | LLaMA | Dolly | MPT | Falcon | Bloomz |
|:--------------:|:-------:|:-----:|:-------:|:-----:|:-----:|:---:|:------:|:------:|
| tokens per word | 2.3 | 2.3 | 2 | 1.9 | 1.9 | 1.9 | 1.8 | 1.4 |
For comparison, with a specialized French tokenizer like [CamemBERT](https://huggingface.co/camembert/camembert-base) or [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base), we get 1.5 tokens per word. In addition to its positive impact on inference time and resource consumption, it has already been [shown](https://arxiv.org/abs/2012.15613) that there is a direct relationship between the number of tokens per word required for modeling and the predictive performance of the model.
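To reproduce this kind of comparison, the tokens-per-word ratio can be measured directly with the published tokenizers. The snippet below is a minimal sketch (the sentence excerpt and the choice of tokenizers are illustrative, not the exact protocol used for the table above):
```python
from transformers import AutoTokenizer

# Illustrative French text; any sufficiently long sentence works.
sentence = (
    "Sans honneur que précaire, sans liberté que provisoire, "
    "et de façon qu'à eux-mêmes il ne leur paraisse pas un vice."
)
n_words = len(sentence.split())

for name in ["bigscience/bloomz-3b", "camembert-base"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tokenizer(sentence, add_special_tokens=False)["input_ids"])
    print(f"{name}: {n_tokens / n_words:.2f} tokens per word")
```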
Dataset
-------
After analyzing a substantial set of modelings, we have observed that the most effective pre-training for zero-shot use cases is pre-training for chatbot contexts. This study was conducted internally, focusing specifically on the French context. As a result, we trained the model on a dataset comprising 0.9 billion tokens. This dataset consists of interactions between an individual and a third party. To balance the French and English data, we utilized the Google Translate API.
Training
--------
Here is the table summarizing the architecture used for training, along with the training time and the average inference speed on the target architecture, in tokens per second:
| model | Architecture | Training time (h) | Inference speed (tokens per second) |
|:----------------------:|:-------------:|:-----------------:|:-----------------------------------:|
| [bloomz-560m-sft-chat](https://huggingface.co/cmarkea/bloomz-560m-sft-chat) | 1 x A100 40GB | 41 | 29 |
| [bloomz-3b-sft-chat](https://huggingface.co/cmarkea/bloomz-3b-sft-chat) | 1 x A100 40GB | 140 | 13 |
| [bloomz-7b1-mt-sft-chat](https://huggingface.co/cmarkea/bloomz-7b1-mt-sft-chat) | 4 x A100 40GB | 268 | 8 |
| Hyperparameter | Value |
|:---------------------:|:----------:|
| label smoothing | 0.05 |
| optimizer | AdamW |
| betas | 0.9, 0.999 |
| learning rate | 1e-5 |
| anneal strategy | cos |
| div factor | 100 |
| final div factor | 0.1 |
| batch size | 4 |
| gradient accumulation | 100 |
| max length | 1500 |
Experimentations
----------------
Since the model was trained only on English and French corpora, its performance cannot be guaranteed in other languages. This degradation is also due to the change of the model's data type from float16 to bfloat16. The conversation example below illustrates this point:
| French | English | Spanish |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| **human:** C'est quoi la première guerre mondiale ?<br>**bot:** La première guerre mondiale a commencé en 1914 et s'est terminée en 1918. C'était une guerre mondiale qui a impliqué de nombreux pays et de nombreuses personnes.<br>**human:** Quels pays ?<br>**bot:** La première guerre mondiale a été déclenchée par la Russie, l'Autriche-Hongrie, la France, la Grande-Bretagne, les États-Unis et l'Allemagne.<br>**human:** Comment cela s'est-il fini et quels ont été ses impacts ?<br>**bot:** La première guerre mondiale s'est terminée en 1918. Il y a eu beaucoup de morts et de destruction.<br>**human:** Combien de mort ?<br>**bot:** Il y a eu environ 100 millions de morts.<br>**human:** Tu es certain de ce chiffre, car ça me parait beaucoup, non ?<br>**bot:** Je suis sûr de ce chiffre. | **human:** What is the first world war?<br>**bot:** The first world war was fought between 1914 and 1918.<br>**human:** Who were the belligerents?<br>**bot:** The belligerents were the United Kingdom, France, Russia, Austria-Hungary, Serbia, and Montenegro.<br>**human:** How did it end and what were its impacts?<br>**bot:** The war ended with the Treaty of Versailles in 1919.<br>**human:** How many deaths?<br>**bot:** The war killed about 100 million people.<br>**human:** Are you sure about this number, because it seems like a lot to me, right?<br>**bot:** I’m not sure, but I think it’s a good estimate. | **human:** ¿Qué es la Primera Guerra Mundial?<br>**bot:** La Primera Guerra Mundial fue un conflicto armado que duró de 1914 a 1918<br>**human:** ¿En qué países participamos?<br>**bot:** La Primera Guerra Mundial fue un conflicto armado que duró de 1914 a 1918. |
Evaluating the performance of generative models can be challenging, and quantitative measures may not always provide the most relevant insights for this type of approach. In this regard, we conducted a qualitative performance study. It involves assessing the relevance of model responses to a pool of French questions using blind evaluation. The metric used for evaluation is the average of two criteria: response accuracy and completeness, similar to what the [SSA metric](https://arxiv.org/abs/2001.09977) aims to capture. Please note that this measure lacks rigorous scientific validity due to the limited number of questions in the test dataset and the fact that it involves only a few individuals with similar socio-demographic characteristics. The prompts take the same format for all models:
```
[Instruction]
Question : [Question]
Réponse :
```
As a result, the prompts do not exploit the structures of chatbot models to ensure fairness, and the evaluation quantifies performance in a purely instruction-based approach. The figure below illustrates the results. The higher a model is positioned in the top-left corner with a small circle radius, the better the model; conversely, if a model is towards the bottom-right with a large circle, it performs less favorably.

We observe that across all models, the performance gain is logarithmic in relation to the increase in model parameters. However, for models that undergo multiple pre-trainings (vanilla, instruction, and chat), models pre-trained on instruction and chat perform significantly better in zero-shot contexts, with a notable improvement for chat-based approaches. The models we have trained demonstrate promising efficiency in this test compared to the number of parameters, indicating cost-effectiveness in a production context.
How to use bloomz-3b-sft-chat
-------------------------------
There are no specific instructions for using these models in a normal causal inference context. However, to leverage the chatbot capability of the model, an individual's prompt should be preceded by the EOS token (</s>), and the generated part should be preceded by the BOS token (<s>). The structure takes the following form:
```
</s>[human prompt 1]<s>[bot answer 1]</s>[human prompt 2]<s>
```
For example, to load the model using the HuggingFace pipeline interface:
```python
from transformers import pipeline
model = pipeline("text-generation", "cmarkea/bloomz-3b-sft-chat")
result = model("</s>C'est quoi le deep learning ?<s>", max_new_tokens=512)
result
[{'generated_text': "</s>C'est quoi le deep learning ?<s>Le deep learning
est un sous-ensemble de l'intelligence artificielle qui utilise des
réseaux de neurones pour apprendre à partir de données."}]
```
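For a multi-turn exchange, the same pipeline can be called with a prompt assembled from the conversation history according to the EOS/BOS convention above. The helper below is a minimal sketch (not part of the original card):
```python
def build_prompt(history, new_user_message):
    # history is a list of (user, bot) pairs; the string follows the
    # </s>[user]<s>[bot] convention described above.
    prompt = ""
    for user, bot in history:
        prompt += f"</s>{user}<s>{bot}"
    return prompt + f"</s>{new_user_message}<s>"

history = [("C'est quoi le deep learning ?",
            "Le deep learning est un sous-ensemble de l'intelligence artificielle.")]
prompt = build_prompt(history, "Donne un exemple d'application.")
result = model(prompt, max_new_tokens=128)
```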
Citation
--------
```bibtex
@online{DeBloomzChat,
AUTHOR = {Cyrile Delestre},
URL = {https://huggingface.co/cmarkea/bloomz-3b-sft-chat},
YEAR = {2023},
KEYWORDS = {NLP ; Transformers ; LLM ; Bloomz},
}
```
|
FunAILab/NeCo | FunAILab | "2025-03-30T13:05:55Z" | 0 | 0 | null | [
"computer_vision",
"en",
"arxiv:2408.11054",
"base_model:facebook/dinov2-base",
"base_model:finetune:facebook/dinov2-base",
"license:mit",
"region:us"
] | null | "2025-03-29T19:46:16Z" | |
williamdang/bruhhh | williamdang | "2024-05-07T18:07:36Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-05-07T18:07:34Z" | ---
license: creativeml-openrail-m
---
|
lesso05/768d9d37-47e2-4a24-a3a6-855337d44150 | lesso05 | "2025-01-26T05:10:09Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-26T05:05:29Z" | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 768d9d37-47e2-4a24-a3a6-855337d44150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: true
chat_template: llama3
datasets:
- data_files:
- 6b30f33bbd9cba22_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b30f33bbd9cba22_train_data.json
type:
field_input: reasoning
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/768d9d37-47e2-4a24-a3a6-855337d44150
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/6b30f33bbd9cba22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66ffa688-b6ab-4800-bb73-500be3c51df8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66ffa688-b6ab-4800-bb73-500be3c51df8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 768d9d37-47e2-4a24-a3a6-855337d44150
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0010 | 1 | nan |
| 0.0 | 0.0048 | 5 | nan |
| 0.0 | 0.0097 | 10 | nan |
| 0.0 | 0.0145 | 15 | nan |
| 0.0 | 0.0194 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mscheny/mine6_0 | mscheny | "2024-04-09T00:02:33Z" | 42 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-08T19:17:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Saul98lm/prueba2 | Saul98lm | "2023-09-15T23:36:33Z" | 199 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-09-14T03:09:35Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: prueba2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prueba2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0071
- Accuracy: 1.0
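Since the card does not include a usage snippet, here is a minimal inference sketch for this image classifier (the image path is illustrative; labels follow the beans dataset: angular_leaf_spot, bean_rust, healthy):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Saul98lm/prueba2")
print(classifier("bean_leaf.jpg"))  # list of {label, score} dicts
```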
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1508 | 3.85 | 500 | 0.0071 | 1.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
PrunaAI/togethercomputer-RedPajama-INCITE-Chat-3B-v1-HQQ-4bit-smashed | PrunaAI | "2025-02-27T12:27:52Z" | 0 | 0 | null | [
"gpt_neox",
"pruna-ai",
"hqq",
"region:us"
] | null | "2025-02-27T12:25:20Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo ORIGINAL_REPO_NAME. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/togethercomputer-RedPajama-INCITE-Chat-3B-v1-HQQ-4bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/togethercomputer-RedPajama-INCITE-Chat-3B-v1-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
RichardErkhov/KingNish_-_Reasoning-Llama-1b-v0.1-awq | RichardErkhov | "2024-11-20T17:39:34Z" | 5 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | null | "2024-11-20T17:38:55Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Reasoning-Llama-1b-v0.1 - AWQ
- Model creator: https://huggingface.co/KingNish/
- Original model: https://huggingface.co/KingNish/Reasoning-Llama-1b-v0.1/
Original model description:
---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- KingNish/reasoning-base-20k
language:
- en
license: llama3.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- reasoning
- llama-3
---
# Model Description
This is the first iteration of this model. For testing purposes, it was trained on only 10k rows.
It performed better than expected. It first produces its reasoning and then generates a response based on it, much like o1.
The reasoning is generated separately (just like o1), without special tags (unlike Reflection-style models).
Below is the inference code.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
MAX_REASONING_TOKENS = 1024
MAX_RESPONSE_TOKENS = 512
model_name = "KingNish/Reasoning-Llama-1b-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
{"role": "user", "content": prompt}
]
# Generate reasoning
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)
# print("REASONING: " + reasoning_output)
# Generate answer
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True)
print("ANSWER: " + response_output)
```
- **Trained by:** [Nishith Jain](https://huggingface.co/KingNish)
- **License:** llama3.2
- **Finetuned from model :** [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- **Dataset used :** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k)
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
brittlewis12/Mistral-Small-24B-Instruct-2501-reasoning-GGUF | brittlewis12 | "2025-02-17T21:28:17Z" | 0 | 0 | null | [
"gguf",
"reasoning",
"mistral",
"text-generation",
"en",
"dataset:open-r1/OpenR1-Math-220k",
"dataset:simplescaling/s1K-1.1",
"dataset:yentinglin/s1K-1.1-trl-format",
"base_model:yentinglin/Mistral-Small-24B-Instruct-2501-reasoning",
"base_model:quantized:yentinglin/Mistral-Small-24B-Instruct-2501-reasoning",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-17T14:59:20Z" | ---
base_model: yentinglin/Mistral-Small-24B-Instruct-2501-reasoning
pipeline_tag: text-generation
inference: true
language:
- en
license: apache-2.0
model_creator: yentinglin
model_name: Mistral-Small-24B-Instruct-2501-reasoning
model_type: mistral
quantized_by: brittlewis12
tags:
- reasoning
- mistral
datasets:
- open-r1/OpenR1-Math-220k
- simplescaling/s1K-1.1
- yentinglin/s1K-1.1-trl-format
---
# Mistral Small Reasoning GGUF
**Original model**: [Mistral-Small-24B-Instruct-2501-reasoning](https://huggingface.co/yentinglin/Mistral-Small-24B-Instruct-2501-reasoning)
**Model creator**: [yentinglin](https://huggingface.co/yentinglin)
> This model is a fine-tuned version of mistralai/Mistral-Small-24B-Instruct-2501, specifically optimized for mathematical reasoning tasks. It has been fine-tuned on datasets including OpenR1-Math-220k, and s1K-1.1, aiming to enhance its reasoning capabilities.
This repo contains GGUF format model files for Yen-Ting Lin’s Mistral Small Reasoning.
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format,
introduced by the llama.cpp team on August 21st 2023.
Converted with llama.cpp build 4735 (revision [73e2ed3](https://github.com/ggml-org/llama.cpp/commits/73e2ed3ce3492d3ed70193dd09ae8aa44779651d)),
using [autogguf-rs](https://github.com/brittlewis12/autogguf-rs).
### Prompt template: Mistral Instruct (New)
```
[SYSTEM_PROMPT]{{system_message}}[/SYSTEM_PROMPT]
[INST]{{prompt}}[/INST]
{{assistant_message}}
```
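To apply this template locally, a minimal sketch with llama-cpp-python is shown below (an assumption — any GGUF runtime works; the file name must match the quant you downloaded, and the question is illustrative):
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-24B-Instruct-2501-reasoning.Q4_K_M.gguf",  # adjust to your file
    n_ctx=8192,
)

prompt = (
    "[SYSTEM_PROMPT]You are a careful mathematical reasoner.[/SYSTEM_PROMPT]"
    "[INST]Prove that the sum of two even integers is even.[/INST]"
)
output = llm(prompt, max_tokens=512)
print(output["choices"][0]["text"])
```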
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
* or, use an API key with the chat completions-compatible model provider of your choice -- ChatGPT, Claude, Gemini, DeepSeek, & more!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
* if you **already have the app**, download Mistral Small Reasoning now!
* <cnvrsai:///models/search/hf?id=brittlewis12/Mistral-Small-24B-Instruct-2501-reasoning-GGUF>
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluation
> The evaluation code is available at [Hugging Face Open-R1](https://github.com/huggingface/open-r1). Note that I have updated the AIME 25 dataset to the full set, available at [AIME 2025](https://huggingface.co/datasets/yentinglin/aime_2025).
>
> Our results below are averaged over multiple runs. See our eval details [here.](https://huggingface.co/datasets/yentinglin/zhtw-reasoning-details-_fsx_ubuntu_yentinglin_ckpt_run_20250214_1600_checkpoint-800_)
| Pass@1 | # Params | MATH-500 | AIME 2025 | AIME 2024 | GPQA Diamond |
|-----------------------------------|---------|---------|-----------|-----------|--------------|
| **Mistral-24B-Reasoning (Ours)** | 24B | 95.0 | 53.33 | 66.67 | 62.02 |
| Mistral-24B-Instruct | 24B | 70.6 | - | - | 45.3 |
| s1.1-32B | 32B | 93.2 | 40.0 | 56.7 | 61.62 |
| LIMO | 32B | 94.8 | 36.67 | 57.1 | 59.09 |
| DeepSeek-R1-Distill-Llama-70B | 70B | 94.5 | 46.67 | 70.0 | 65.2 |
| DeepSeek-R1-Distill-Qwen-32B | 32B | 94.3 | 60.0 | 72.6 | 62.1 |
| DeepSeek-R1 | 671B | 97.3 | 70.0 | 72.6 | 71.5 |
| o1 | - | 96.4 | 79.0 | - | 75.7 |
| o3-mini (high) | - | 97.9 | 86.5 | - | 77.2 |
| o3-mini (medium) | - | 97.3 | 76.5 | - | 74.9 |
|
jeevana/group8qna_gpt2__26janV001 | jeevana | "2024-01-26T15:06:25Z" | 159 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-26T14:56:41Z" | ---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: group8qna_gpt2__26janV001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# group8qna_gpt2__26janV001
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8110
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9404 | 0.47 | 100 | 2.9545 |
| 2.8649 | 0.93 | 200 | 2.8110 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
HitmanReborn/Wed21O1_KK10 | HitmanReborn | "2025-02-21T16:22:59Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-21T16:16:57Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
wongctroman/fine-tuned-cloudy-sentence-transformer-9 | wongctroman | "2024-03-11T04:13:52Z" | 49 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-03-11T04:12:08Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-9
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-9')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-9)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 18 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 500,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
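Putting these parameters together, the training run could be reproduced roughly as follows. This is a sketch using the classic sentence-transformers `fit` API; the base checkpoint and the triplet data are not published in this card, so both are placeholders:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Stand-in for the (unnamed) base checkpoint that was fine-tuned.
model = SentenceTransformer("wongctroman/fine-tuned-cloudy-sentence-transformer-9")

# Placeholder triplets: (anchor, positive, negative)
train_examples = [
    InputExample(texts=["cloudy sky", "overcast weather", "sunny beach"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=5)
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=15,
    warmup_steps=100,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```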
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
epiverseai/mistral-7b-r-data-science | epiverseai | "2024-04-22T02:25:56Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | "2024-04-22T02:10:42Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
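In the meantime, here is a minimal loading sketch under the assumption that this repo contains a standard PEFT (LoRA) adapter for the base model listed above; the prompt is illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "epiverseai/mistral-7b-r-data-science"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("How do I read a CSV file in R?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```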
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
krsnastuta1/model | krsnastuta1 | "2025-01-12T14:35:53Z" | 13 | 0 | null | [
"pytorch",
"llama",
"unsloth",
"trl",
"sft",
"license:apache-2.0",
"region:us"
] | null | "2025-01-12T14:31:16Z" | ---
license: apache-2.0
tags:
- unsloth
- trl
- sft
---
|
minhtien2405/Qwen2.5-VL-32B-Instruct-Golf-Scorecard | minhtien2405 | "2025-04-01T20:03:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-04-01T06:41:30Z" | ---
base_model: Qwen/Qwen2.5-VL-32B-Instruct
library_name: transformers
model_name: Qwen2.5-VL-32B-Instruct-Golf-Scorecard
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-VL-32B-Instruct-Golf-Scorecard
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="minhtien2405/Qwen2.5-VL-32B-Instruct-Golf-Scorecard", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phamminhtien2405-vg/Qwen2.5-VL-32B-Instruct-Golf-Scorecard/runs/fjxwmu8u)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
gagan3012/k2t-tiny | gagan3012 | "2021-09-22T08:27:33Z" | 8 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t-tiny",
"Keywords to Sentences",
"en",
"dataset:WebNLG",
"dataset:Dart",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: Keywords to Sentences
tags:
- keytotext
- k2t-tiny
- Keywords to Sentences
license: mit
datasets:
- WebNLG
- Dart
metrics:
- NLG
---
# keytotext

The idea is to build a model that takes keywords as input and generates sentences as output.
### Keytotext is powered by Huggingface 🤗
[](https://pypi.org/project/keytotext/)
[](https://pepy.tech/project/keytotext)
[](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
[](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
## Model:
Keytotext is based on the Amazing T5 Model:
- `k2t`: [Model](https://huggingface.co/gagan3012/k2t)
- `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny)
- `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base)
Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder
## Usage:
Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb)
Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder
```
pip install keytotext
```
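Once installed, the model can presumably be loaded through the keytotext pipeline helper. A minimal sketch based on the library's documented interface (the keyword list is illustrative):
```python
from keytotext import pipeline

# Load the tiny variant of the keyword-to-text model.
nlp = pipeline("k2t-tiny")

# Generate a sentence from a list of keywords.
print(nlp(["India", "wedding", "food"]))
```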

## UI:
UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py)
```
pip install streamlit-tags
```
This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)

|
SEVUNX/JOY_DIFFUSION | SEVUNX | "2023-06-08T03:46:53Z" | 0 | 0 | null | [
"art",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-02-28T13:37:36Z" | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- art
- stable-diffusion
---
<center>
<b><i><font size="6"><p style="color:red">JOY DIFFUSION CHECKPOINT MERGE</p></font></i></b>
<img src="https://64.media.tumblr.com/3c2c6f40b41877ef923150a52705a14a/tumblr_mlnzf9BvWN1qg6rkio1_500.gifv" alt="">
</center> |
kartikgupta373/e5-ad15569-705531-brown | kartikgupta373 | "2025-01-29T08:31:35Z" | 7 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-29T08:31:33Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# E5 Ad15569 705531 Brown
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kartikgupta373/e5-ad15569-705531-brown', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
qa26/test_sentiment_v4 | qa26 | "2025-03-24T08:44:58Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-03-24T08:43:49Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
yesj1234/enko_mbartLarge_36p_tokenize_run1 | yesj1234 | "2023-11-03T01:04:00Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"en",
"ko",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-03T01:00:10Z" | ---
language:
- en
- ko
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: enko_mbartLarge_36p_tokenize_run1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enko_mbartLarge_36p_tokenize_run1
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1249
- Bleu: 38.8566
- Gen Len: 16.4716
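As the card does not include a usage snippet, English→Korean translation with the underlying mBART-50 interface would look roughly like this (a sketch; language codes follow the base model's convention):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "yesj1234/enko_mbartLarge_36p_tokenize_run1"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```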
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.3157 | 0.46 | 5000 | 1.2895 | 34.4176 | 16.4931 |
| 1.2575 | 0.93 | 10000 | 1.2279 | 35.0029 | 16.8009 |
| 1.1578 | 1.39 | 15000 | 1.1733 | 36.9282 | 16.5838 |
| 1.0885 | 1.86 | 20000 | 1.1464 | 37.6913 | 16.6649 |
| 1.0451 | 2.32 | 25000 | 1.1437 | 37.7875 | 16.5188 |
| 1.0465 | 2.79 | 30000 | 1.1425 | 37.895 | 16.4987 |
| 1.0156 | 3.25 | 35000 | 1.1464 | 37.8434 | 16.5515 |
| 0.9893 | 3.72 | 40000 | 1.1544 | 37.358 | 16.6096 |
| 0.8779 | 4.18 | 45000 | 1.1419 | 38.1772 | 16.457 |
| 0.8565 | 4.65 | 50000 | 1.1249 | 38.8455 | 16.4749 |
| 0.7293 | 5.11 | 55000 | 1.1566 | 38.4853 | 16.3462 |
| 0.7294 | 5.57 | 60000 | 1.1824 | 37.8822 | 16.3295 |
| 0.7254 | 6.04 | 65000 | 1.2153 | 37.3612 | 16.381 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
anrhi/mobile_v2__fake_image_M_detection | anrhi | "2024-04-18T11:14:19Z" | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | "2024-04-18T11:13:46Z" | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
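The card does not show how to load the checkpoint. A minimal sketch, assuming the repository was pushed with `push_to_hub_keras` (consistent with the `keras`/`tf-keras` tags):
```python
from huggingface_hub import from_pretrained_keras

# Assumes the repo contains a Keras SavedModel pushed via push_to_hub_keras
model = from_pretrained_keras("anrhi/mobile_v2__fake_image_M_detection")
model.summary()
```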
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
isspek/roberta-base_monkeypox_llama_5_2e-5_16_undersampling_0.4 | isspek | "2025-03-23T13:32:38Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-26T14:28:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
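As a sketch only (the card leaves this section unfilled), the `roberta`/`text-classification` tags suggest the checkpoint can be loaded with the standard pipeline; the label names and intended inputs are not documented:
```python
from transformers import pipeline

# Assumes a standard RoBERTa sequence-classification head; output labels are undocumented
classifier = pipeline(
    "text-classification",
    model="isspek/roberta-base_monkeypox_llama_5_2e-5_16_undersampling_0.4",
)
print(classifier("Example monkeypox-related claim to classify."))
```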
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nerva1228/zhuiguang1 | Nerva1228 | "2025-04-11T01:18:03Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-11T01:18:01Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: zhuiguang1
---
# Zhuiguang1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zhuiguang1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "zhuiguang1",
"lora_weights": "https://huggingface.co/Nerva1228/zhuiguang1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/zhuiguang1', weight_name='lora.safetensors')
image = pipeline('zhuiguang1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/zhuiguang1/discussions) to add images that show off what you’ve made with this LoRA.
|
anton96vice/av-tg-phi3-new | anton96vice | "2024-05-07T06:51:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-07T06:51:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k19_task2_organization | MayBashendy | "2025-01-16T05:17:44Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-16T05:09:54Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k19_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run1_AugV5_k19_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8590
- Qwk: 0.4278
- Mse: 0.8590
- Rmse: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0187 | 2 | 4.1324 | 0.0018 | 4.1324 | 2.0328 |
| No log | 0.0374 | 4 | 2.9786 | -0.0193 | 2.9786 | 1.7259 |
| No log | 0.0561 | 6 | 1.8373 | 0.1273 | 1.8373 | 1.3555 |
| No log | 0.0748 | 8 | 1.3982 | 0.0082 | 1.3982 | 1.1825 |
| No log | 0.0935 | 10 | 1.2240 | 0.1140 | 1.2240 | 1.1063 |
| No log | 0.1121 | 12 | 1.1479 | 0.1857 | 1.1479 | 1.0714 |
| No log | 0.1308 | 14 | 1.1280 | 0.1857 | 1.1280 | 1.0621 |
| No log | 0.1495 | 16 | 1.1043 | 0.1875 | 1.1043 | 1.0508 |
| No log | 0.1682 | 18 | 1.0671 | 0.2300 | 1.0671 | 1.0330 |
| No log | 0.1869 | 20 | 1.2814 | 0.2065 | 1.2814 | 1.1320 |
| No log | 0.2056 | 22 | 1.2555 | 0.2015 | 1.2555 | 1.1205 |
| No log | 0.2243 | 24 | 1.0915 | 0.2697 | 1.0915 | 1.0448 |
| No log | 0.2430 | 26 | 1.1210 | 0.2115 | 1.1210 | 1.0588 |
| No log | 0.2617 | 28 | 1.0726 | 0.3090 | 1.0726 | 1.0357 |
| No log | 0.2804 | 30 | 1.1005 | 0.2454 | 1.1005 | 1.0491 |
| No log | 0.2991 | 32 | 1.4587 | 0.2635 | 1.4587 | 1.2078 |
| No log | 0.3178 | 34 | 1.6666 | 0.2519 | 1.6666 | 1.2910 |
| No log | 0.3364 | 36 | 1.1943 | 0.2344 | 1.1943 | 1.0928 |
| No log | 0.3551 | 38 | 1.1555 | 0.3671 | 1.1555 | 1.0750 |
| No log | 0.3738 | 40 | 1.2732 | 0.3496 | 1.2732 | 1.1284 |
| No log | 0.3925 | 42 | 1.0942 | 0.3590 | 1.0942 | 1.0460 |
| No log | 0.4112 | 44 | 1.1496 | 0.2398 | 1.1496 | 1.0722 |
| No log | 0.4299 | 46 | 1.2297 | 0.2270 | 1.2297 | 1.1089 |
| No log | 0.4486 | 48 | 1.0041 | 0.5559 | 1.0041 | 1.0020 |
| No log | 0.4673 | 50 | 1.0135 | 0.4541 | 1.0135 | 1.0067 |
| No log | 0.4860 | 52 | 1.0003 | 0.5356 | 1.0003 | 1.0001 |
| No log | 0.5047 | 54 | 1.0223 | 0.5559 | 1.0223 | 1.0111 |
| No log | 0.5234 | 56 | 1.0017 | 0.5298 | 1.0017 | 1.0009 |
| No log | 0.5421 | 58 | 1.0844 | 0.4670 | 1.0844 | 1.0413 |
| No log | 0.5607 | 60 | 1.1740 | 0.5585 | 1.1740 | 1.0835 |
| No log | 0.5794 | 62 | 1.1596 | 0.5072 | 1.1596 | 1.0768 |
| No log | 0.5981 | 64 | 1.0692 | 0.5 | 1.0692 | 1.0340 |
| No log | 0.6168 | 66 | 1.0940 | 0.4974 | 1.0940 | 1.0459 |
| No log | 0.6355 | 68 | 0.9992 | 0.4716 | 0.9992 | 0.9996 |
| No log | 0.6542 | 70 | 0.9613 | 0.4421 | 0.9613 | 0.9805 |
| No log | 0.6729 | 72 | 0.9095 | 0.4061 | 0.9095 | 0.9537 |
| No log | 0.6916 | 74 | 0.9471 | 0.3947 | 0.9471 | 0.9732 |
| No log | 0.7103 | 76 | 0.9104 | 0.4648 | 0.9104 | 0.9542 |
| No log | 0.7290 | 78 | 0.8937 | 0.4510 | 0.8937 | 0.9454 |
| No log | 0.7477 | 80 | 0.8701 | 0.5702 | 0.8701 | 0.9328 |
| No log | 0.7664 | 82 | 0.9481 | 0.5179 | 0.9481 | 0.9737 |
| No log | 0.7850 | 84 | 1.0435 | 0.5458 | 1.0435 | 1.0215 |
| No log | 0.8037 | 86 | 0.9743 | 0.4848 | 0.9743 | 0.9870 |
| No log | 0.8224 | 88 | 1.1064 | 0.5514 | 1.1064 | 1.0519 |
| No log | 0.8411 | 90 | 1.0641 | 0.5491 | 1.0641 | 1.0316 |
| No log | 0.8598 | 92 | 1.0041 | 0.4996 | 1.0041 | 1.0020 |
| No log | 0.8785 | 94 | 1.1609 | 0.4954 | 1.1609 | 1.0775 |
| No log | 0.8972 | 96 | 1.2288 | 0.4728 | 1.2288 | 1.1085 |
| No log | 0.9159 | 98 | 1.0448 | 0.4760 | 1.0448 | 1.0222 |
| No log | 0.9346 | 100 | 0.9374 | 0.5182 | 0.9374 | 0.9682 |
| No log | 0.9533 | 102 | 1.1021 | 0.4424 | 1.1021 | 1.0498 |
| No log | 0.9720 | 104 | 1.1129 | 0.4432 | 1.1129 | 1.0549 |
| No log | 0.9907 | 106 | 0.8829 | 0.4817 | 0.8829 | 0.9396 |
| No log | 1.0093 | 108 | 0.9592 | 0.5402 | 0.9592 | 0.9794 |
| No log | 1.0280 | 110 | 1.0835 | 0.5308 | 1.0835 | 1.0409 |
| No log | 1.0467 | 112 | 0.9734 | 0.5158 | 0.9734 | 0.9866 |
| No log | 1.0654 | 114 | 0.8974 | 0.5354 | 0.8974 | 0.9473 |
| No log | 1.0841 | 116 | 0.8910 | 0.5432 | 0.8910 | 0.9439 |
| No log | 1.1028 | 118 | 0.9335 | 0.5362 | 0.9335 | 0.9662 |
| No log | 1.1215 | 120 | 1.1412 | 0.4658 | 1.1412 | 1.0683 |
| No log | 1.1402 | 122 | 1.2625 | 0.3633 | 1.2625 | 1.1236 |
| No log | 1.1589 | 124 | 1.1311 | 0.4380 | 1.1311 | 1.0635 |
| No log | 1.1776 | 126 | 0.8989 | 0.4823 | 0.8989 | 0.9481 |
| No log | 1.1963 | 128 | 1.0228 | 0.4191 | 1.0228 | 1.0113 |
| No log | 1.2150 | 130 | 1.1108 | 0.3784 | 1.1108 | 1.0539 |
| No log | 1.2336 | 132 | 0.9623 | 0.4638 | 0.9623 | 0.9810 |
| No log | 1.2523 | 134 | 0.8378 | 0.5729 | 0.8378 | 0.9153 |
| No log | 1.2710 | 136 | 0.8683 | 0.4503 | 0.8683 | 0.9319 |
| No log | 1.2897 | 138 | 1.0159 | 0.4665 | 1.0159 | 1.0079 |
| No log | 1.3084 | 140 | 1.0691 | 0.5002 | 1.0691 | 1.0340 |
| No log | 1.3271 | 142 | 0.9907 | 0.5130 | 0.9907 | 0.9953 |
| No log | 1.3458 | 144 | 0.9913 | 0.5299 | 0.9913 | 0.9956 |
| No log | 1.3645 | 146 | 0.9389 | 0.5455 | 0.9389 | 0.9690 |
| No log | 1.3832 | 148 | 0.8802 | 0.5303 | 0.8802 | 0.9382 |
| No log | 1.4019 | 150 | 0.8714 | 0.5140 | 0.8714 | 0.9335 |
| No log | 1.4206 | 152 | 0.9040 | 0.5126 | 0.9040 | 0.9508 |
| No log | 1.4393 | 154 | 0.9245 | 0.5286 | 0.9245 | 0.9615 |
| No log | 1.4579 | 156 | 0.9449 | 0.5286 | 0.9449 | 0.9720 |
| No log | 1.4766 | 158 | 0.8711 | 0.4837 | 0.8711 | 0.9333 |
| No log | 1.4953 | 160 | 0.8741 | 0.4489 | 0.8741 | 0.9349 |
| No log | 1.5140 | 162 | 0.8575 | 0.4195 | 0.8575 | 0.9260 |
| No log | 1.5327 | 164 | 0.9523 | 0.5431 | 0.9523 | 0.9758 |
| No log | 1.5514 | 166 | 1.0663 | 0.4645 | 1.0663 | 1.0326 |
| No log | 1.5701 | 168 | 0.9782 | 0.4867 | 0.9782 | 0.9890 |
| No log | 1.5888 | 170 | 0.8558 | 0.5498 | 0.8558 | 0.9251 |
| No log | 1.6075 | 172 | 0.8299 | 0.4910 | 0.8299 | 0.9110 |
| No log | 1.6262 | 174 | 0.8694 | 0.4388 | 0.8694 | 0.9324 |
| No log | 1.6449 | 176 | 0.9222 | 0.3924 | 0.9222 | 0.9603 |
| No log | 1.6636 | 178 | 0.9191 | 0.3945 | 0.9191 | 0.9587 |
| No log | 1.6822 | 180 | 1.0713 | 0.5474 | 1.0713 | 1.0350 |
| No log | 1.7009 | 182 | 1.1053 | 0.5493 | 1.1053 | 1.0514 |
| No log | 1.7196 | 184 | 0.9597 | 0.5091 | 0.9597 | 0.9796 |
| No log | 1.7383 | 186 | 0.9027 | 0.4668 | 0.9027 | 0.9501 |
| No log | 1.7570 | 188 | 0.8962 | 0.4764 | 0.8962 | 0.9467 |
| No log | 1.7757 | 190 | 0.8520 | 0.4859 | 0.8520 | 0.9230 |
| No log | 1.7944 | 192 | 0.8374 | 0.4069 | 0.8374 | 0.9151 |
| No log | 1.8131 | 194 | 0.8637 | 0.4998 | 0.8637 | 0.9294 |
| No log | 1.8318 | 196 | 1.0575 | 0.5105 | 1.0575 | 1.0284 |
| No log | 1.8505 | 198 | 1.1165 | 0.4810 | 1.1165 | 1.0567 |
| No log | 1.8692 | 200 | 0.9475 | 0.4921 | 0.9475 | 0.9734 |
| No log | 1.8879 | 202 | 0.8583 | 0.4335 | 0.8583 | 0.9264 |
| No log | 1.9065 | 204 | 0.8680 | 0.4728 | 0.8680 | 0.9317 |
| No log | 1.9252 | 206 | 0.9059 | 0.5515 | 0.9059 | 0.9518 |
| No log | 1.9439 | 208 | 0.9264 | 0.4685 | 0.9264 | 0.9625 |
| No log | 1.9626 | 210 | 1.0017 | 0.4935 | 1.0017 | 1.0009 |
| No log | 1.9813 | 212 | 0.9736 | 0.4935 | 0.9736 | 0.9867 |
| No log | 2.0 | 214 | 0.9167 | 0.4782 | 0.9167 | 0.9575 |
| No log | 2.0187 | 216 | 0.8843 | 0.5102 | 0.8843 | 0.9404 |
| No log | 2.0374 | 218 | 0.8841 | 0.4926 | 0.8841 | 0.9402 |
| No log | 2.0561 | 220 | 0.9079 | 0.4874 | 0.9079 | 0.9528 |
| No log | 2.0748 | 222 | 0.9825 | 0.5256 | 0.9825 | 0.9912 |
| No log | 2.0935 | 224 | 0.9588 | 0.4773 | 0.9588 | 0.9792 |
| No log | 2.1121 | 226 | 0.9238 | 0.3379 | 0.9238 | 0.9611 |
| No log | 2.1308 | 228 | 0.9257 | 0.3130 | 0.9257 | 0.9621 |
| No log | 2.1495 | 230 | 0.9364 | 0.4509 | 0.9364 | 0.9677 |
| No log | 2.1682 | 232 | 1.0019 | 0.5054 | 1.0019 | 1.0009 |
| No log | 2.1869 | 234 | 0.9995 | 0.5054 | 0.9995 | 0.9998 |
| No log | 2.2056 | 236 | 0.9068 | 0.4763 | 0.9068 | 0.9523 |
| No log | 2.2243 | 238 | 0.8679 | 0.4859 | 0.8679 | 0.9316 |
| No log | 2.2430 | 240 | 0.8734 | 0.4835 | 0.8734 | 0.9345 |
| No log | 2.2617 | 242 | 0.9639 | 0.5249 | 0.9639 | 0.9818 |
| No log | 2.2804 | 244 | 1.1116 | 0.4736 | 1.1116 | 1.0543 |
| No log | 2.2991 | 246 | 1.1228 | 0.4979 | 1.1228 | 1.0596 |
| No log | 2.3178 | 248 | 1.0296 | 0.5217 | 1.0296 | 1.0147 |
| No log | 2.3364 | 250 | 0.9507 | 0.5420 | 0.9507 | 0.9751 |
| No log | 2.3551 | 252 | 0.9444 | 0.5263 | 0.9444 | 0.9718 |
| No log | 2.3738 | 254 | 0.8953 | 0.4553 | 0.8953 | 0.9462 |
| No log | 2.3925 | 256 | 0.8565 | 0.4779 | 0.8565 | 0.9255 |
| No log | 2.4112 | 258 | 0.8548 | 0.4656 | 0.8548 | 0.9246 |
| No log | 2.4299 | 260 | 0.8884 | 0.4870 | 0.8884 | 0.9426 |
| No log | 2.4486 | 262 | 0.9674 | 0.5578 | 0.9674 | 0.9835 |
| No log | 2.4673 | 264 | 0.9588 | 0.5430 | 0.9588 | 0.9792 |
| No log | 2.4860 | 266 | 0.9492 | 0.5308 | 0.9492 | 0.9743 |
| No log | 2.5047 | 268 | 0.9429 | 0.5308 | 0.9429 | 0.9711 |
| No log | 2.5234 | 270 | 0.8562 | 0.5121 | 0.8562 | 0.9253 |
| No log | 2.5421 | 272 | 0.8283 | 0.5841 | 0.8283 | 0.9101 |
| No log | 2.5607 | 274 | 0.7993 | 0.5884 | 0.7993 | 0.8940 |
| No log | 2.5794 | 276 | 0.8100 | 0.6009 | 0.8100 | 0.9000 |
| No log | 2.5981 | 278 | 0.8273 | 0.5575 | 0.8273 | 0.9096 |
| No log | 2.6168 | 280 | 0.8330 | 0.4920 | 0.8330 | 0.9127 |
| No log | 2.6355 | 282 | 0.8664 | 0.4604 | 0.8664 | 0.9308 |
| No log | 2.6542 | 284 | 0.9391 | 0.4987 | 0.9391 | 0.9691 |
| No log | 2.6729 | 286 | 0.9678 | 0.5493 | 0.9678 | 0.9838 |
| No log | 2.6916 | 288 | 0.8829 | 0.5210 | 0.8829 | 0.9396 |
| No log | 2.7103 | 290 | 0.8308 | 0.5356 | 0.8308 | 0.9115 |
| No log | 2.7290 | 292 | 0.8150 | 0.5010 | 0.8150 | 0.9028 |
| No log | 2.7477 | 294 | 0.8066 | 0.4499 | 0.8066 | 0.8981 |
| No log | 2.7664 | 296 | 0.8094 | 0.4616 | 0.8094 | 0.8997 |
| No log | 2.7850 | 298 | 0.8116 | 0.5223 | 0.8116 | 0.9009 |
| No log | 2.8037 | 300 | 0.8089 | 0.5316 | 0.8089 | 0.8994 |
| No log | 2.8224 | 302 | 0.8078 | 0.5304 | 0.8078 | 0.8988 |
| No log | 2.8411 | 304 | 0.7935 | 0.5276 | 0.7935 | 0.8908 |
| No log | 2.8598 | 306 | 0.7817 | 0.4772 | 0.7817 | 0.8841 |
| No log | 2.8785 | 308 | 0.7748 | 0.4902 | 0.7748 | 0.8802 |
| No log | 2.8972 | 310 | 0.7786 | 0.5802 | 0.7786 | 0.8824 |
| No log | 2.9159 | 312 | 0.8149 | 0.5601 | 0.8149 | 0.9027 |
| No log | 2.9346 | 314 | 0.9237 | 0.5627 | 0.9237 | 0.9611 |
| No log | 2.9533 | 316 | 0.9322 | 0.5280 | 0.9322 | 0.9655 |
| No log | 2.9720 | 318 | 0.8562 | 0.4593 | 0.8562 | 0.9253 |
| No log | 2.9907 | 320 | 0.8208 | 0.4108 | 0.8208 | 0.9060 |
| No log | 3.0093 | 322 | 0.7929 | 0.4841 | 0.7929 | 0.8905 |
| No log | 3.0280 | 324 | 0.7928 | 0.5197 | 0.7928 | 0.8904 |
| No log | 3.0467 | 326 | 0.8045 | 0.5362 | 0.8045 | 0.8969 |
| No log | 3.0654 | 328 | 0.9661 | 0.5794 | 0.9661 | 0.9829 |
| No log | 3.0841 | 330 | 1.4085 | 0.5311 | 1.4085 | 1.1868 |
| No log | 3.1028 | 332 | 1.5909 | 0.4531 | 1.5909 | 1.2613 |
| No log | 3.1215 | 334 | 1.4706 | 0.4197 | 1.4706 | 1.2127 |
| No log | 3.1402 | 336 | 1.2009 | 0.4968 | 1.2009 | 1.0958 |
| No log | 3.1589 | 338 | 1.0140 | 0.5605 | 1.0140 | 1.0070 |
| No log | 3.1776 | 340 | 0.9072 | 0.5261 | 0.9072 | 0.9525 |
| No log | 3.1963 | 342 | 0.8510 | 0.5264 | 0.8510 | 0.9225 |
| No log | 3.2150 | 344 | 0.8395 | 0.4923 | 0.8395 | 0.9163 |
| No log | 3.2336 | 346 | 0.8485 | 0.5235 | 0.8485 | 0.9212 |
| No log | 3.2523 | 348 | 0.8078 | 0.5038 | 0.8078 | 0.8988 |
| No log | 3.2710 | 350 | 0.7962 | 0.5152 | 0.7962 | 0.8923 |
| No log | 3.2897 | 352 | 0.8123 | 0.5416 | 0.8123 | 0.9013 |
| No log | 3.3084 | 354 | 0.8122 | 0.5114 | 0.8122 | 0.9012 |
| No log | 3.3271 | 356 | 0.8500 | 0.4321 | 0.8500 | 0.9220 |
| No log | 3.3458 | 358 | 0.8884 | 0.4375 | 0.8884 | 0.9425 |
| No log | 3.3645 | 360 | 0.8882 | 0.4608 | 0.8882 | 0.9425 |
| No log | 3.3832 | 362 | 0.9043 | 0.4795 | 0.9043 | 0.9510 |
| No log | 3.4019 | 364 | 0.9514 | 0.5260 | 0.9514 | 0.9754 |
| No log | 3.4206 | 366 | 0.9134 | 0.4938 | 0.9134 | 0.9557 |
| No log | 3.4393 | 368 | 0.9214 | 0.5054 | 0.9214 | 0.9599 |
| No log | 3.4579 | 370 | 0.8789 | 0.5365 | 0.8789 | 0.9375 |
| No log | 3.4766 | 372 | 0.8645 | 0.5028 | 0.8645 | 0.9298 |
| No log | 3.4953 | 374 | 0.9029 | 0.5395 | 0.9029 | 0.9502 |
| No log | 3.5140 | 376 | 0.9215 | 0.5106 | 0.9215 | 0.9599 |
| No log | 3.5327 | 378 | 0.8970 | 0.4614 | 0.8970 | 0.9471 |
| No log | 3.5514 | 380 | 0.8577 | 0.4787 | 0.8577 | 0.9261 |
| No log | 3.5701 | 382 | 0.8620 | 0.5201 | 0.8620 | 0.9284 |
| No log | 3.5888 | 384 | 0.9246 | 0.4834 | 0.9246 | 0.9616 |
| No log | 3.6075 | 386 | 0.9870 | 0.4824 | 0.9870 | 0.9935 |
| No log | 3.6262 | 388 | 0.9216 | 0.4834 | 0.9216 | 0.9600 |
| No log | 3.6449 | 390 | 0.8242 | 0.4363 | 0.8242 | 0.9078 |
| No log | 3.6636 | 392 | 0.8417 | 0.4560 | 0.8417 | 0.9174 |
| No log | 3.6822 | 394 | 0.9691 | 0.4276 | 0.9691 | 0.9844 |
| No log | 3.7009 | 396 | 1.0400 | 0.4355 | 1.0400 | 1.0198 |
| No log | 3.7196 | 398 | 1.0286 | 0.4152 | 1.0286 | 1.0142 |
| No log | 3.7383 | 400 | 0.9769 | 0.4309 | 0.9769 | 0.9884 |
| No log | 3.7570 | 402 | 0.9145 | 0.4166 | 0.9145 | 0.9563 |
| No log | 3.7757 | 404 | 0.8425 | 0.4369 | 0.8425 | 0.9179 |
| No log | 3.7944 | 406 | 0.8528 | 0.4237 | 0.8528 | 0.9235 |
| No log | 3.8131 | 408 | 0.9147 | 0.4545 | 0.9147 | 0.9564 |
| No log | 3.8318 | 410 | 0.9491 | 0.4533 | 0.9491 | 0.9742 |
| No log | 3.8505 | 412 | 0.9815 | 0.4533 | 0.9815 | 0.9907 |
| No log | 3.8692 | 414 | 0.8878 | 0.5028 | 0.8878 | 0.9422 |
| No log | 3.8879 | 416 | 0.8775 | 0.5261 | 0.8775 | 0.9368 |
| No log | 3.9065 | 418 | 0.8421 | 0.4711 | 0.8421 | 0.9176 |
| No log | 3.9252 | 420 | 0.8523 | 0.4720 | 0.8523 | 0.9232 |
| No log | 3.9439 | 422 | 0.8965 | 0.4898 | 0.8965 | 0.9468 |
| No log | 3.9626 | 424 | 0.9273 | 0.4811 | 0.9273 | 0.9629 |
| No log | 3.9813 | 426 | 0.9694 | 0.4167 | 0.9694 | 0.9846 |
| No log | 4.0 | 428 | 0.9411 | 0.3065 | 0.9411 | 0.9701 |
| No log | 4.0187 | 430 | 0.9831 | 0.4339 | 0.9831 | 0.9915 |
| No log | 4.0374 | 432 | 0.9968 | 0.4337 | 0.9968 | 0.9984 |
| No log | 4.0561 | 434 | 1.0225 | 0.4521 | 1.0225 | 1.0112 |
| No log | 4.0748 | 436 | 1.1246 | 0.5206 | 1.1246 | 1.0605 |
| No log | 4.0935 | 438 | 1.0400 | 0.4697 | 1.0400 | 1.0198 |
| No log | 4.1121 | 440 | 0.9246 | 0.4252 | 0.9246 | 0.9616 |
| No log | 4.1308 | 442 | 0.9210 | 0.4453 | 0.9210 | 0.9597 |
| No log | 4.1495 | 444 | 0.9545 | 0.4527 | 0.9545 | 0.9770 |
| No log | 4.1682 | 446 | 0.9575 | 0.4533 | 0.9575 | 0.9785 |
| No log | 4.1869 | 448 | 0.9253 | 0.4539 | 0.9253 | 0.9619 |
| No log | 4.2056 | 450 | 0.8661 | 0.4940 | 0.8661 | 0.9307 |
| No log | 4.2243 | 452 | 0.8290 | 0.4637 | 0.8290 | 0.9105 |
| No log | 4.2430 | 454 | 0.8380 | 0.4808 | 0.8380 | 0.9154 |
| No log | 4.2617 | 456 | 0.9125 | 0.4166 | 0.9125 | 0.9553 |
| No log | 4.2804 | 458 | 0.9688 | 0.3556 | 0.9688 | 0.9843 |
| No log | 4.2991 | 460 | 0.9398 | 0.3465 | 0.9398 | 0.9694 |
| No log | 4.3178 | 462 | 0.9405 | 0.3734 | 0.9405 | 0.9698 |
| No log | 4.3364 | 464 | 0.9618 | 0.4082 | 0.9618 | 0.9807 |
| No log | 4.3551 | 466 | 0.9951 | 0.4083 | 0.9951 | 0.9976 |
| No log | 4.3738 | 468 | 1.0250 | 0.4359 | 1.0250 | 1.0124 |
| No log | 4.3925 | 470 | 0.9957 | 0.4770 | 0.9957 | 0.9979 |
| No log | 4.4112 | 472 | 0.9142 | 0.4623 | 0.9142 | 0.9561 |
| No log | 4.4299 | 474 | 0.8493 | 0.4996 | 0.8493 | 0.9216 |
| No log | 4.4486 | 476 | 0.8341 | 0.5164 | 0.8341 | 0.9133 |
| No log | 4.4673 | 478 | 0.8401 | 0.5164 | 0.8401 | 0.9166 |
| No log | 4.4860 | 480 | 0.8693 | 0.4237 | 0.8693 | 0.9324 |
| No log | 4.5047 | 482 | 0.9137 | 0.4607 | 0.9137 | 0.9559 |
| No log | 4.5234 | 484 | 0.9471 | 0.3590 | 0.9471 | 0.9732 |
| No log | 4.5421 | 486 | 0.9381 | 0.3543 | 0.9381 | 0.9685 |
| No log | 4.5607 | 488 | 0.9303 | 0.3656 | 0.9303 | 0.9645 |
| No log | 4.5794 | 490 | 0.9380 | 0.3897 | 0.9380 | 0.9685 |
| No log | 4.5981 | 492 | 0.9701 | 0.4545 | 0.9701 | 0.9850 |
| No log | 4.6168 | 494 | 0.9737 | 0.4741 | 0.9737 | 0.9867 |
| No log | 4.6355 | 496 | 0.9299 | 0.4946 | 0.9299 | 0.9643 |
| No log | 4.6542 | 498 | 0.8582 | 0.5077 | 0.8582 | 0.9264 |
| 0.337 | 4.6729 | 500 | 0.8413 | 0.5172 | 0.8413 | 0.9172 |
| 0.337 | 4.6916 | 502 | 0.8460 | 0.5106 | 0.8460 | 0.9198 |
| 0.337 | 4.7103 | 504 | 0.8423 | 0.4592 | 0.8423 | 0.9178 |
| 0.337 | 4.7290 | 506 | 0.8415 | 0.4548 | 0.8415 | 0.9173 |
| 0.337 | 4.7477 | 508 | 0.8534 | 0.4804 | 0.8534 | 0.9238 |
| 0.337 | 4.7664 | 510 | 0.8590 | 0.4278 | 0.8590 | 0.9268 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
IlluminatiPudding/a2c-PandaPickAndPlaceDense-v3_v20 | IlluminatiPudding | "2023-11-21T11:22:56Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlaceDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-21T11:17:14Z" | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlaceDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlaceDense-v3
type: PandaPickAndPlaceDense-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlaceDense-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlaceDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Sketch: checkpoint filename assumed to follow the usual huggingface_sb3 <algo>-<env>.zip convention
checkpoint = load_from_hub("IlluminatiPudding/a2c-PandaPickAndPlaceDense-v3_v20", "a2c-PandaPickAndPlaceDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Yuki20/capstone-llama7B-lora | Yuki20 | "2024-05-03T16:03:24Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:baffo32/decapoda-research-llama-7B-hf",
"base_model:adapter:baffo32/decapoda-research-llama-7B-hf",
"region:us"
] | null | "2024-05-03T02:31:18Z" | ---
library_name: peft
base_model: baffo32/decapoda-research-llama-7B-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
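As a sketch only (the card does not provide loading code), the declared base model and the PEFT adapter format suggest:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes this repo holds only a LoRA adapter for the base model declared in the card
base_id = "baffo32/decapoda-research-llama-7B-hf"
adapter_id = "Yuki20/capstone-llama7B-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
```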
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.1.dev0 |
mav23/TableLLM-13b-GGUF | mav23 | "2024-10-30T15:04:51Z" | 112 | 0 | null | [
"gguf",
"Table",
"QA",
"Code",
"en",
"dataset:RUCKBReasoning/TableLLM-SFT",
"arxiv:2403.19318",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | "2024-10-30T13:29:13Z" | ---
license: llama2
datasets:
- RUCKBReasoning/TableLLM-SFT
language:
- en
tags:
- Table
- QA
- Code
---
# TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios
| **[Paper](https://arxiv.org/abs/2403.19318)** | **[Training set](https://huggingface.co/datasets/RUCKBReasoning/TableLLM-SFT)** | **[Github](https://github.com/RUCKBReasoning/TableLLM)** | **[Homepage](https://tablellm.github.io/)** |
We present **TableLLM**, a powerful large language model designed to handle tabular data manipulation tasks efficiently, whether they are embedded in spreadsheets or documents, meeting the demands of real office scenarios. The TableLLM series encompasses two distinct scales: [TableLLM-7B](https://huggingface.co/RUCKBReasoning/TableLLM-7b) and [TableLLM-13B](https://huggingface.co/RUCKBReasoning/TableLLM-13b), which are fine-tuned based on [CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) and [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf).
TableLLM generates either a code solution or a direct text answer for tabular data manipulation tasks, depending on the scenario. Code generation is used for spreadsheet-embedded tabular data, which typically involves insert, delete, update, query, merge, and plot operations on tables. Text generation is used for document-embedded tabular data, which typically involves query operations on short tables.
## Evaluation Results
We evaluate the code-solution generation ability of TableLLM on three benchmarks: WikiSQL, Spider, and a self-created table operation benchmark. The text-answer generation ability is tested on four benchmarks: WikiTableQuestions (WikiTQ), TAT-QA, FeTaQA, and OTTQA. The evaluation results are shown below:
| Model | WikiTQ | TAT-QA | FeTaQA | OTTQA | WikiSQL | Spider | Self-created | Average |
| :------------------- | :----: | :----: | :----: | :-----: | :-----: | :----: | :----------: | :-----: |
| TaPEX | 38.5 | – | – | – | 83.9 | 15.0 | / | 45.8 |
| TaPas | 31.5 | – | – | – | 74.2 | 23.1 | / | 42.92 |
| TableLlama | 24.0 | 22.2 | 20.5 | 6.4 | 43.7 | 9.0 | / | 20.7 |
| GPT3.5 | 58.5 |<ins>72.1</ins>| 71.2 | 60.8 | 81.7 | 67.4 | 77.1 | 69.8 |
| GPT4 |**74.1**|**77.1**|**78.4**|**69.5** | 84.0 | 69.5 | 77.8 | **75.8**|
| Llama2-Chat (13B) | 48.8 | 49.6 | 67.7 | 61.5 | – | – | – | 56.9 |
| CodeLlama (13B) | 43.4 | 47.2 | 57.2 | 49.7 | 38.3 | 21.9 | 47.6 | 43.6 |
| Deepseek-Coder (33B) | 6.5 | 11.0 | 7.1 | 7.4 | 72.5 | 58.4 | 73.9 | 33.8 |
| StructGPT (GPT3.5) | 52.5 | 27.5 | 11.8 | 14.0 | 67.8 |**84.8**| / | 48.9 |
| Binder (GPT3.5) | 61.6 | 12.8 | 6.8 | 5.1 | 78.6 | 52.6 | / | 42.5 |
| DATER (GPT3.5) | 53.4 | 28.4 | 18.3 | 13.0 | 58.2 | 26.5 | / | 37.0 |
| TableLLM-7B (Ours) | 58.8 | 66.9 | 72.6 |<ins>63.1</ins>|<ins>86.6</ins>| 82.6 |<ins>78.8</ins>| 72.8 |
| TableLLM-13B (Ours) |<ins>62.4</ins>| 68.2 |<ins>74.5</ins>| 62.5 | **90.7**|<ins>83.4</ins>| **80.8** |<ins>74.7</ins>|
## Prompt Template
The prompts we used for generating code solutions and text answers are introduced below.
### Code Solution
The prompt template for the insert, delete, update, query, and plot operations on a single table.
```
[INST]Below are the first few lines of a CSV file. You need to write a Python program to solve the provided question.
Header and first few lines of CSV file:
{csv_data}
Question: {question}[/INST]
```
The prompt template for the merge operation on two tables.
```
[INST]Below are the first few lines two CSV file. You need to write a Python program to solve the provided question.
Header and first few lines of CSV file 1:
{csv_data1}
Header and first few lines of CSV file 2:
{csv_data2}
Question: {question}[/INST]
```
The csv_data field is filled with the first few lines of your provided table file. Below is an example:
```
Sex,Length,Diameter,Height,Whole weight,Shucked weight,Viscera weight,Shell weight,Rings
M,0.455,0.365,0.095,0.514,0.2245,0.101,0.15,15
M,0.35,0.265,0.09,0.2255,0.0995,0.0485,0.07,7
F,0.53,0.42,0.135,0.677,0.2565,0.1415,0.21,9
M,0.44,0.365,0.125,0.516,0.2155,0.114,0.155,10
I,0.33,0.255,0.08,0.205,0.0895,0.0395,0.055,7
```
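As an illustration (not part of the original card), the single-table template above could be filled from a local CSV file along these lines; the file name and question here are made up:
```python
from itertools import islice

def build_prompt(csv_path: str, question: str, n_lines: int = 6) -> str:
    # Read the header plus the first few data rows, mirroring the template above
    with open(csv_path, encoding="utf-8") as f:
        head = "".join(islice(f, n_lines))
    return (
        "[INST]Below are the first few lines of a CSV file. "
        "You need to write a Python program to solve the provided question.\n\n"
        f"Header and first few lines of CSV file:\n{head}\n"
        f"Question: {question}[/INST]"
    )

print(build_prompt("abalone.csv", "How many rows have more than 10 Rings?"))
```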
### Text Answer
The prompt template for direct text answer generation on short tables.
````
[INST]Offer a thorough and accurate solution that directly addresses the Question outlined in the [Question].
### [Table Text]
{table_descriptions}
### [Table]
```
{table_in_csv}
```
### [Question]
{question}
### [Solution][INST/]
````
For more details about how to use TableLLM, please refer to our GitHub page: <https://github.com/TableLLM/TableLLM> |
KalaiselvanD/kalai_bert_model_test_2 | KalaiselvanD | "2024-04-23T07:25:44Z" | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-23T07:25:35Z" | ---
license: apache-2.0
base_model: albert/albert-base-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: kalai_bert_model_test_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kalai_bert_model_test_2
This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3276
- Accuracy: 0.93
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.5033 | 0.93 |
| No log | 2.0 | 50 | 0.3276 | 0.93 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
bunbohue/zero-shot-prompting-llama2-7b_readsum | bunbohue | "2023-12-17T09:45:46Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2023-12-14T12:08:19Z" | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-7b_readme_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b_readme_summarization
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.14.7
- Tokenizers 0.14.1
|
tsunemoto/Metis-0.4-GGUF | tsunemoto | "2023-12-19T15:48:21Z" | 4 | 0 | null | [
"gguf",
"GGUF",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2023-12-19T15:39:29Z" | ---
title: "Metis-0.4 Quantized in GGUF"
tags:
- GGUF
language: en
---

# Tsunemoto GGUF's of Metis-0.4
This is a GGUF quantization of Metis-0.4.
## Original Repo Link:
[Original Repository](https://huggingface.co/Mihaiii/Metis-0.4)
## Original Model Card:
---
This is a merge between Metis-0.3 and Metis-0.1, with Metis-0.1 as the base.
It was done using [mergekit](https://github.com/cg123/mergekit).
It works well with long system prompts.
It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but only for reasoning and text comprehension.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format:
```
<|system|>
{system_message} </s>
<|user|>
{prompt} </s>
<|assistant|>
```
Merge config:
```yaml
slices:
- sources:
- model: Mihaiii/Metis-0.3
layer_range: [0, 32]
- model: Mihaiii/Metis-0.1
layer_range: [0, 32]
merge_method: slerp
base_model: Mihaiii/Metis-0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
``` |
prxy5604/2e6f0fa2-ae7c-4b0e-b0b6-6164a084a63a | prxy5604 | "2025-02-02T11:33:24Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | "2025-02-02T10:09:00Z" | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2e6f0fa2-ae7c-4b0e-b0b6-6164a084a63a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 0e1ed4d2be26c22a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/0e1ed4d2be26c22a_train_data.json
type:
field_instruction: Prompt
field_output: Response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/2e6f0fa2-ae7c-4b0e-b0b6-6164a084a63a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/0e1ed4d2be26c22a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 035827d4-9d40-453b-ad95-d8e73c15bed1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 035827d4-9d40-453b-ad95-d8e73c15bed1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 2e6f0fa2-ae7c-4b0e-b0b6-6164a084a63a
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5655 | 0.0002 | 1 | 2.7124 |
| 2.5511 | 0.0090 | 50 | 2.1295 |
| 2.5553 | 0.0181 | 100 | 2.0593 |
| 2.5626 | 0.0271 | 150 | 2.0121 |
| 2.4566 | 0.0361 | 200 | 1.9997 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
biustnaspust/alloallo32 | biustnaspust | "2025-04-02T16:35:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T16:31:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF | mradermacher | "2024-12-27T09:08:36Z" | 14 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/MeliodasPercival_01_Experiment28T3q",
"base_model:quantized:MaziyarPanahi/MeliodasPercival_01_Experiment28T3q",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-27T08:47:28Z" | ---
base_model: MaziyarPanahi/MeliodasPercival_01_Experiment28T3q
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: MeliodasPercival_01_Experiment28T3q
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/MeliodasPercival_01_Experiment28T3q
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
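As a concrete starting point (not part of the original card), the snippet below downloads one of the quants from the "Provided Quants" table further down and runs it with `llama-cpp-python`; the prompt and the context size are arbitrary choices.

```python
# pip install llama-cpp-python huggingface_hub   (assumed environment, not stated in the card)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant listed in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF",
    filename="MeliodasPercival_01_Experiment28T3q.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # n_ctx is an arbitrary choice
out = llm("Write one sentence about mountains.", max_tokens=64)
print(out["choices"][0]["text"])
```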
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_Experiment28T3q-GGUF/resolve/main/MeliodasPercival_01_Experiment28T3q.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
lotorpotor-video-link/Ayu-Latifah-viral-video-original-full-link-video-now | lotorpotor-video-link | "2025-03-29T17:10:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-29T17:10:28Z" | <animated-image data-catalyst=""><a href="https://alltvsteam.com/viral-video/?v=news-es-tvdf" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
mradermacher/smut_mistral_7b02_v0_merged-GGUF | mradermacher | "2024-12-09T03:54:10Z" | 34 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jspr/smut_mistral_7b02_v0_merged",
"base_model:quantized:jspr/smut_mistral_7b02_v0_merged",
"endpoints_compatible",
"region:us"
] | null | "2024-12-08T23:08:47Z" | ---
base_model: jspr/smut_mistral_7b02_v0_merged
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/jspr/smut_mistral_7b02_v0_merged
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
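For a download-and-load shortcut, here is a hedged sketch assuming a recent `llama-cpp-python` that ships the `Llama.from_pretrained` helper; it pulls the Q4_K_M file from the table below, and the prompt is arbitrary.

```python
from llama_cpp import Llama  # assumes llama-cpp-python with huggingface_hub installed

llm = Llama.from_pretrained(
    repo_id="mradermacher/smut_mistral_7b02_v0_merged-GGUF",
    filename="smut_mistral_7b02_v0_merged.Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```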
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/smut_mistral_7b02_v0_merged-GGUF/resolve/main/smut_mistral_7b02_v0_merged.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Fetanos/Reinforce-Pixelcopter-PLE-v0 | Fetanos | "2024-05-21T13:29:28Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-05-14T14:43:22Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
datalearningpr/name_to_gender | datalearningpr | "2023-03-25T13:06:16Z" | 34 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-25T12:59:48Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: name_to_gender
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# name_to_gender
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0403
- Accuracy: 0.9917
## Model description
More information needed
## Intended uses & limitations
More information needed
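No usage notes are provided; as a hedged sketch, the checkpoint is a BERT text-classification fine-tune, so a standard pipeline call should apply. The example name below is invented, and the mapping from label ids to genders is not documented here.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="datalearningpr/name_to_gender")
print(classifier("晓丽"))  # e.g. [{"label": "LABEL_1", "score": ...}]; label-to-gender mapping is undocumented
```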
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0987 | 1.0 | 1200 | 0.0477 | 0.9862 |
| 0.0339 | 2.0 | 2400 | 0.0403 | 0.9917 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
bartelds/group-dro_xlsr_set_2 | bartelds | "2025-03-03T23:41:23Z" | 0 | 0 | null | [
"asr",
"group-dro",
"XLSR",
"multilingual",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2025-03-03T23:40:27Z" | ---
title: "Group-DRO XLSR-based ASR model - set 2"
language: multilingual
tags:
- asr
- group-dro
- XLSR
license: cc-by-nc-4.0
---
# Group-DRO XLSR-based ASR model - set 2
This repository contains a Group-DRO XLSR-based automatic speech recognition (ASR) model trained with ESPnet.
The model was trained on balanced training data from set 2.
## Intended Use
This model is intended for ASR. Users can run inference using the provided checkpoint (`valid.loss.best.pth`) and configuration file (`config.yaml`):
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text
asr_train_config = "group-dro_xlsr_set_2/config.yaml"
asr_model_file = "group-dro_xlsr_set_2/valid.loss.best.pth"
model = Speech2Text.from_pretrained(
asr_train_config=asr_train_config,
asr_model_file=asr_model_file
)
speech, _ = sf.read("input.wav")
text, *_ = model(speech)[0]
print("Recognized text:", text)
```
## How to Use
1. Clone this repository.
2. Use ESPnet’s inference scripts with the provided `config.yaml` and checkpoint file.
3. Ensure any external resources referenced in `config.yaml` are available at the indicated relative paths.
|
John6666/llama-tagger-HF-GPTQ-4bits | John6666 | "2024-06-18T01:57:44Z" | 5,753 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-06-16T23:36:34Z" | ---
license: mit
inference: false
---
Original model is [here](https://huggingface.co/ooferdoodles/llama-tagger-HF).
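The card itself only points at the original model, so the following is an assumed loading sketch: GPTQ repositories generally load through the standard Transformers API when a GPTQ backend (e.g. optimum with auto-gptq or gptqmodel) is installed, and the prompt below is hypothetical.

```python
# Assumption: standard Transformers + GPTQ backend; usage is not documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "John6666/llama-tagger-HF-GPTQ-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("a picture of ", return_tensors="pt").to(model.device)  # hypothetical prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```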
|
myselfronin/cso_ner | myselfronin | "2023-12-18T13:12:22Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-12-18T13:10:21Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: cso_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cso_ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0525
- Precision: 0.7858
- Recall: 0.7174
- F1: 0.7500
- Accuracy: 0.9824
## Model description
More information needed
## Intended uses & limitations
More information needed
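Usage is not documented; assuming the standard Transformers token-classification setup (the sentence below is illustrative only and the entity label scheme is not described in the card), a NER pipeline call would look like:

```python
from transformers import pipeline

ner = pipeline("token-classification", model="myselfronin/cso_ner", aggregation_strategy="simple")
print(ner("Deep learning and knowledge graphs are studied in the Computer Science Ontology."))
```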
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0914 | 1.0 | 5873 | 0.0822 | 0.7346 | 0.5135 | 0.6045 | 0.9730 |
| 0.069 | 2.0 | 11746 | 0.0653 | 0.7796 | 0.6010 | 0.6787 | 0.9781 |
| 0.057 | 3.0 | 17619 | 0.0594 | 0.7665 | 0.6702 | 0.7151 | 0.9797 |
| 0.0504 | 4.0 | 23492 | 0.0537 | 0.7936 | 0.6982 | 0.7429 | 0.9820 |
| 0.0455 | 5.0 | 29365 | 0.0525 | 0.7858 | 0.7174 | 0.7500 | 0.9824 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Chhabi/test2-nepali-health-llama2-7b | Chhabi | "2024-03-01T23:43:39Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-03-01T23:40:36Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
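The section is left blank; as an assumption based only on the repository tags (a Llama-style causal LM fine-tuned with TRL/SFT), a minimal generation sketch looks like this — the prompt and its format are hypothetical:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Chhabi/test2-nepali-health-llama2-7b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "What are common symptoms of seasonal flu?"  # hypothetical; the expected prompt format is undocumented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```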
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/bigcode_-_starcoderbase-1b-4bits | RichardErkhov | "2025-01-11T07:43:19Z" | 8 | 0 | null | [
"safetensors",
"gpt_bigcode",
"arxiv:1911.02150",
"arxiv:2205.14135",
"arxiv:2207.14255",
"arxiv:2305.06161",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-11T07:42:37Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
starcoderbase-1b - bnb 4bits
- Model creator: https://huggingface.co/bigcode/
- Original model: https://huggingface.co/bigcode/starcoderbase-1b/
Original model description:
---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: StarCoderBase-1B
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 15.17
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (C++)
metrics:
- name: pass@1
type: pass@1
value: 11.68
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 14.2
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 13.38
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (PHP)
metrics:
- name: pass@1
type: pass@1
value: 9.94
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Lua)
metrics:
- name: pass@1
type: pass@1
value: 12.52
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Rust)
metrics:
- name: pass@1
type: pass@1
value: 10.24
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Swift)
metrics:
- name: pass@1
type: pass@1
value: 3.92
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (Julia)
metrics:
- name: pass@1
type: pass@1
value: 11.31
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL-HumanEval (R)
metrics:
- name: pass@1
type: pass@1
value: 5.37
verified: false
extra_gated_prompt: >-
## Model License Agreement
Please read the BigCode [OpenRAIL-M
license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
agreement before accepting it.
extra_gated_fields:
I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
duplicated_from: bigcode-data/starcoderbase-1b
---
# StarCoderBase-1B
1B version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase).
## Table of Contents
1. [Model Summary](##model-summary)
2. [Use](##use)
3. [Limitations](##limitations)
4. [Training](##training)
5. [License](##license)
6. [Citation](##citation)
## Model Summary
StarCoderBase-1B is a 1B parameter model trained on 80+ programming languages from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack), with opt-out requests excluded. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150), [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1 trillion tokens.
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [💫StarCoder: May the source be with you!](https://arxiv.org/abs/2305.06161)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Languages:** 80+ Programming languages
## Use
### Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well. However, by using the [Tech Assistant prompt](https://huggingface.co/datasets/bigcode/ta-prompt) you can turn it into a capable technical assistant.
**Feel free to share your generations in the Community tab!**
### Generation
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoderbase-1b"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Fill-in-the-middle
Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix part of the input and output:
```python
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 500k
- **Pretraining tokens:** 1 trillion
- **Precision:** bfloat16
## Hardware
- **GPUs:** 128 Tesla A100
- **Training time:** 11 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@article{li2023starcoder,
title={StarCoder: may the source be with you!},
author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and João Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos Muñoz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2023},
eprint={2305.06161},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
infogeo/a4ae8324-8f43-4e72-b8ea-938124c9ffb2 | infogeo | "2025-02-05T14:56:49Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T14:17:48Z" | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4ae8324-8f43-4e72-b8ea-938124c9ffb2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 7a9b7e93517dd03f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7a9b7e93517dd03f_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: infogeo/a4ae8324-8f43-4e72-b8ea-938124c9ffb2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001004
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: linear
max_grad_norm: 1.0
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/7a9b7e93517dd03f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1600e4aa-9898-4b90-be27-589afaed7e49
wandb_project: cold34
wandb_run: your_name
wandb_runid: 1600e4aa-9898-4b90-be27-589afaed7e49
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a4ae8324-8f43-4e72-b8ea-938124c9ffb2
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2824
## Model description
More information needed
## Intended uses & limitations
More information needed
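Since this repository is a PEFT/LoRA adapter trained on top of 01-ai/Yi-1.5-9B-Chat-16K (see the axolotl config above), a typical — hedged, not author-documented — way to use it is to load the base model and attach the adapter with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "01-ai/Yi-1.5-9B-Chat-16K"
adapter_id = "infogeo/a4ae8324-8f43-4e72-b8ea-938124c9ffb2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # applies the LoRA weights on top of the base model
```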
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001004
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7063 | 0.0002 | 1 | 1.4657 |
| 0.4693 | 0.0103 | 50 | 0.3905 |
| 0.2699 | 0.0205 | 100 | 0.3957 |
| 0.3621 | 0.0308 | 150 | 0.3441 |
| 0.3759 | 0.0411 | 200 | 0.2824 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
badokorach/xlm-roberta-base-finetuned-newqa1 | badokorach | "2023-09-06T08:55:47Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-09-05T18:18:27Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-newqa1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-newqa1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2643
## Model description
More information needed
## Intended uses & limitations
More information needed
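The card gives no examples; as a hedged sketch, the model is an extractive QA fine-tune of XLM-R, so the standard question-answering pipeline should apply (the question and context below are invented):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="badokorach/xlm-roberta-base-finetuned-newqa1")
print(qa(question="Who wrote the report?", context="The annual report was written by Alice in 2022."))
```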
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 66 | 2.8604 |
| No log | 2.0 | 132 | 2.3332 |
| No log | 3.0 | 198 | 2.1159 |
| No log | 4.0 | 264 | 2.1174 |
| No log | 5.0 | 330 | 2.0671 |
| No log | 6.0 | 396 | 2.1269 |
| No log | 7.0 | 462 | 2.2361 |
| 2.2514 | 8.0 | 528 | 2.2171 |
| 2.2514 | 9.0 | 594 | 2.2304 |
| 2.2514 | 10.0 | 660 | 2.2643 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vaniebermudez/gemma-2b-instruct-ft-derma-qa-finetuning | vaniebermudez | "2024-10-21T03:37:11Z" | 177 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-21T03:31:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
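This section is empty; below is a hedged sketch assuming the checkpoint is a Gemma-2B instruct fine-tune whose tokenizer ships a chat template (the question is invented and no prompt format is documented in the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "vaniebermudez/gemma-2b-instruct-ft-derma-qa-finetuning"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "What does eczema typically look like?"}]  # invented example
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```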
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |