modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 – 2025-04-11 12:28:23) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 420 classes) | tags (sequence, 1 – 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-04-11 12:28:05) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
0xid/rl_course_vizdoom_health_gathering_supreme | 0xid | "2023-03-09T00:04:13Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-09T00:03:55Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 14.71 +/- 6.66
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r 0xid/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m unit8_doom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
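For example, a sketch based on the Sample-Factory Hugging Face integration docs (the `--max_num_episodes` value is a placeholder, and `<your_hf_username>` should be replaced with your Hub username):
```
python -m unit8_doom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --max_num_episodes=10 --push_to_hub --hf_repository=<your_hf_username>/rl_course_vizdoom_health_gathering_supreme
```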
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m unit8_doom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to set `--train_for_env_steps` to a suitably high number, since the experiment will resume from the number of steps at which it previously stopped.
|
ecfirst/360VL_PHI | ecfirst | "2024-06-04T15:32:15Z" | 13 | 1 | transformers | [
"transformers",
"safetensors",
"QH_360VL",
"text-generation",
"visual-question-answering",
"custom_code",
"zh",
"en",
"dataset:liuhaotian/LLaVA-CC3M-Pretrain-595K",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:FreedomIntelligence/ALLaVA-4V-Chinese",
"dataset:shareAI/ShareGPT-Chinese-English-90k",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | visual-question-answering | "2024-05-26T22:26:12Z" | ---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-CC3M-Pretrain-595K
- liuhaotian/LLaVA-Instruct-150K
- FreedomIntelligence/ALLaVA-4V-Chinese
- shareAI/ShareGPT-Chinese-English-90k
language:
- zh
- en
pipeline_tag: visual-question-answering
---
<br>
<br>
# Model Card for 360VL
<p align="center">
<img src="https://github.com/360CVGroup/360VL/blob/master/qh360_vl/360vl.PNG?raw=true" width=100%/>
</p>
**360VL** is developed based on the Llama 3 language model and is also the industry's first open-source large multimodal model built on **Llama3-70B** [[🤗Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)]. In addition to adopting the Llama 3 language model, 360VL introduces a globally-aware multi-branch projector architecture, which gives the model stronger image understanding capabilities.
**GitHub**: https://github.com/360CVGroup/360VL
## Model Zoo
360VL has released the following versions.
| Model | Download |
|---|---|
| 360VL-8B | [🤗 Hugging Face](https://huggingface.co/qihoo360/360VL-8B) |
| 360VL-70B | [🤗 Hugging Face](https://huggingface.co/qihoo360/360VL-70B) |
## Features
360VL offers the following features:
- Multi-round text-image conversations: 360VL can take both text and images as inputs and produce text outputs. Currently, it supports multi-round visual question answering with one image.
- Bilingual text support: 360VL supports conversations in both English and Chinese, including text recognition in images.
- Strong image comprehension: 360VL is adept at analyzing visuals, making it an efficient tool for tasks like extracting, organizing, and summarizing information from images.
- Fine-grained image resolution: 360VL supports image understanding at a higher resolution of 672×672.
## Performance
| Model | Checkpoints | MMB<sub>T</sub> | MMB<sub>D</sub> | MMB-CN<sub>T</sub> | MMB-CN<sub>D</sub> | MMMU<sub>V</sub> | MMMU<sub>T</sub> | MME |
|:--------------------|:------------:|:----:|:------:|:------:|:-------:|:-------:|:-------:|:-------:|
| QWen-VL-Chat | [🤗LINK](https://huggingface.co/Qwen/Qwen-VL-Chat) | 61.8 | 60.6 | 56.3 | 56.7 |37| 32.9 | 1860 |
| mPLUG-Owl2 | [🤖LINK](https://www.modelscope.cn/models/iic/mPLUG-Owl2/summary) | 66.0 | 66.5 | 60.3 | 59.5 |34.7| 32.1 | 1786.4 |
| CogVLM | [🤗LINK](https://huggingface.co/THUDM/cogvlm-grounding-generalist-hf) | 65.8| 63.7 | 55.9 | 53.8 |37.3| 30.1 | 1736.6|
| Monkey-Chat | [🤗LINK](https://huggingface.co/echo840/Monkey-Chat) | 72.4| 71 | 67.5 | 65.8 |40.7| - | 1887.4|
| MM1-7B-Chat | [LINK](https://ar5iv.labs.arxiv.org/html/2403.09611) | -| 72.3 | - | - |37.0| 35.6 | 1858.2|
| IDEFICS2-8B | [🤗LINK](https://huggingface.co/HuggingFaceM4/idefics2-8b) | 75.7 | 75.3 | 68.6 | 67.3 |43.0| 37.7 |1847.6|
| SVIT-v1.5-13B| [🤗LINK](https://huggingface.co/Isaachhe/svit-v1.5-13b-full) | 69.1 | - | 63.1 | - | 38.0| 33.3|1889|
| LLaVA-v1.5-13B | [🤗LINK](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 69.2 | 69.2 | 65 | 63.6 |36.4| 33.6 | 1826.7|
| LLaVA-v1.6-13B | [🤗LINK](https://huggingface.co/liuhaotian/llava-v1.6-vicuna-13b) | 70 | 70.7 | 68.5 | 64.3 |36.2| - |1901|
| Honeybee | [LINK](https://github.com/kakaobrain/honeybee) | 73.6 | 74.3 | - | - |36.2| -|1976.5|
| YI-VL-34B | [🤗LINK](https://huggingface.co/01-ai/Yi-VL-34B) | 72.4 | 71.1 | 70.7 | 71.4 |45.1| 41.6 |2050.2|
| **360VL-8B** | [🤗LINK](https://huggingface.co/qihoo360/360VL-8B) | 75.3 | 73.7 | 71.1 | 68.6 |39.7| 37.1 | 1944.6|
| **360VL-70B** | [🤗LINK](https://huggingface.co/qihoo360/360VL-70B) | 78.1 | 80.4 | 76.9 | 77.7 |50.8| 44.3 | 2012.3|
## Quick Start 🤗
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from PIL import Image

checkpoint = "qihoo360/360VL-8B"

# Load the 360VL model and tokenizer (remote code is required for the custom QH_360VL architecture)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16, device_map='auto', trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)

# Load the vision tower and its image processor onto the GPU
vision_tower = model.get_vision_tower()
vision_tower.load_model()
vision_tower.to(device="cuda", dtype=torch.float16)
image_processor = vision_tower.image_processor
tokenizer.pad_token = tokenizer.eos_token

# Prepare the image and the question
image = Image.open("docs/008.jpg").convert('RGB')
query = "Who is this cartoon character?"
terminators = [
    tokenizer.convert_tokens_to_ids("<|eot_id|>",)
]

# Build the multimodal conversation inputs and move them to the GPU
inputs = model.build_conversation_input_ids(tokenizer, query=query, image=image, image_processor=image_processor)
input_ids = inputs["input_ids"].to(device='cuda', non_blocking=True)
images = inputs["image"].to(dtype=torch.float16, device='cuda', non_blocking=True)

# Generate the answer and decode only the newly generated tokens
output_ids = model.generate(
    input_ids,
    images=images,
    do_sample=False,
    eos_token_id=terminators,
    num_beams=1,
    max_new_tokens=512,
    use_cache=True)
input_token_len = input_ids.shape[1]
outputs = tokenizer.batch_decode(output_ids[:, input_token_len:], skip_special_tokens=True)[0]
outputs = outputs.strip()
print(outputs)
```
**Model type:**
360VL-8B is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
**Model date:**
360VL-8B was trained in April 2024.
## License
This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses.
The content of this project itself is licensed under the Apache License 2.0.
**Where to send questions or comments about the model:**
https://github.com/360CVGroup/360VL
## Related Projects
This work wouldn't be possible without the incredible open-source code of these projects. Huge thanks!
- [Meta Llama 3](https://github.com/meta-llama/llama3)
- [LLaVA: Large Language and Vision Assistant](https://github.com/haotian-liu/LLaVA)
- [Honeybee: Locality-enhanced Projector for Multimodal LLM](https://github.com/kakaobrain/honeybee)
|
masakhane/m2m100_418M_ewe_fr_news | masakhane | "2022-09-24T15:07:42Z" | 102 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"ewe",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-04-15T08:40:02Z" | ---
language:
- ewe
- fr
license: afl-3.0
---
|
CyberHarem/makomo_pokemon | CyberHarem | "2023-08-17T17:20:15Z" | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/makomo_pokemon",
"license:mit",
"region:us"
] | text-to-image | "2023-08-17T17:15:45Z" | ---
license: mit
datasets:
- CyberHarem/makomo_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of makomo_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `.pt` and `.safetensors` files for the specified step, you need to use them together: the `.pt` file serves as an embedding, while the `.safetensors` file is loaded as a LoRA.
For example, if you want to use the model from step 1500, download `1500/makomo_pokemon.pt` as the embedding and `1500/makomo_pokemon.safetensors` for the LoRA. By using both files together, you can generate images of the desired character.
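As an illustration, a minimal sketch with 🤗 Diffusers (not the original HCP-Diffusion/WebUI workflow; it assumes the embedding and LoRA files are compatible with your base Stable Diffusion checkpoint, and the base model name below is a placeholder):
```python
from diffusers import StableDiffusionPipeline
import torch

# Placeholder base model; substitute the checkpoint this LoRA was trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Use both step-1500 files together: the .pt as a textual-inversion embedding,
# the .safetensors as a LoRA.
pipe.load_textual_inversion("1500/makomo_pokemon.pt", token="makomo_pokemon")
pipe.load_lora_weights("1500/makomo_pokemon.safetensors")

# The trigger word activates the character.
image = pipe("makomo_pokemon, best quality").images[0]
image.save("makomo_pokemon.png")
```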
**The trigger word is `makomo_pokemon`.**
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:------------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/makomo_pokemon.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/makomo_pokemon.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/makomo_pokemon.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/makomo_pokemon.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/makomo_pokemon.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/makomo_pokemon.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/makomo_pokemon.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/makomo_pokemon.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/makomo_pokemon.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/makomo_pokemon.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/makomo_pokemon.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/makomo_pokemon.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/makomo_pokemon.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/makomo_pokemon.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/makomo_pokemon.zip) |
|
hs788/wav2vec2-base-timit-demo-colab | hs788 | "2022-01-07T13:34:11Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4125
- Wer: 0.3607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
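A minimal sketch of these settings as a 🤗 `TrainingArguments` configuration (the output directory is a placeholder; only the fields listed above are known from this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the default optimizer.
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # Native AMP mixed precision
)
```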
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2018 | 7.94 | 500 | 1.3144 | 0.8508 |
| 0.4671 | 15.87 | 1000 | 0.4737 | 0.4160 |
| 0.1375 | 23.81 | 1500 | 0.4125 | 0.3607 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
memevis/king17 | memevis | "2025-04-11T10:21:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-11T10:18:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
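In the absence of author-provided code, a minimal sketch with 🤗 Transformers, assuming this is a standard Llama-architecture causal language model as the repository tags suggest (the prompt and generation settings are placeholders):
```python
from transformers import pipeline

# Assumes a standard text-generation checkpoint, per the repository tags.
generator = pipeline("text-generation", model="memevis/king17")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```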
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Qwen/Qwen2-1.5B-Instruct | Qwen | "2024-06-06T14:36:57Z" | 193,962 | 134 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-03T09:08:12Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen2-1.5B-Instruct
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 1.5B Qwen2 model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen2 has been merged into the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
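For example:
```
pip install "transformers>=4.37.0"
```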
## Quickstart
Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen2-1.5B-Instruct",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation
We briefly compare Qwen2-1.5B-Instruct with Qwen1.5-1.8B-Chat. The results are as follows:
| Datasets | Qwen1.5-0.5B-Chat | **Qwen2-0.5B-Instruct** | Qwen1.5-1.8B-Chat | **Qwen2-1.5B-Instruct** |
| :--- | :---: | :---: | :---: | :---: |
| MMLU | 35.0 | **37.9** | 43.7 | **52.4** |
| HumanEval | 9.1 | **17.1** | 25.0 | **37.8** |
| GSM8K | 11.3 | **40.1** | 35.3 | **61.6** |
| C-Eval | 37.2 | **45.2** | 55.3 | **63.8** |
| IFEval (Prompt Strict-Acc.) | 14.6 | **20.0** | 16.8 | **29.0** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
``` |
EdoAbati/whisper-medium-it | EdoAbati | "2025-03-24T15:15:24Z" | 69 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-08T11:09:15Z" | Temporary Redirect. Redirecting to /api/resolve-cache/models/EdoAbati/whisper-medium-it/cd7dea8f9d7f7da69b2733c7afd8995311c6945d/README.md |
gstoica3/roberta-large-peft-rte | gstoica3 | "2023-10-30T20:35:05Z" | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:FacebookAI/roberta-large",
"base_model:adapter:FacebookAI/roberta-large",
"region:us"
] | null | "2023-10-30T20:35:05Z" | ---
library_name: peft
base_model: roberta-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
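In the absence of author-provided code, a minimal sketch with 🤗 PEFT, assuming this is an adapter for RTE-style (two-class entailment) sequence classification on top of `roberta-large`, as the repository name and base model suggest:
```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed: a 2-class entailment head on top of roberta-large.
base = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)
model = PeftModel.from_pretrained(base, "gstoica3/roberta-large-peft-rte")
tokenizer = AutoTokenizer.from_pretrained("roberta-large")

# Example premise/hypothesis pair (placeholder text).
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
logits = model(**inputs).logits
print(logits)
```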
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
adamo1139/danube3-4b-aezakmi-toxic-2908-gguf | adamo1139 | "2024-08-28T23:37:11Z" | 7 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-28T23:25:32Z" | ---
license: apache-2.0
---
|
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k100_task5_organization_fold0 | MayBashendy | "2024-11-27T14:05:07Z" | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-27T13:36:05Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits_FineTuningAraBERT_AugV5_k100_task5_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits_FineTuningAraBERT_AugV5_k100_task5_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set (see the metric-computation sketch after the list):
- Loss: 3.5232
- Qwk: 0.1465
- Mse: 3.5232
- Rmse: 1.8770
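A minimal sketch of how these metrics can be computed with scikit-learn, assuming integer-valued predictions and reference labels (the arrays below are placeholders):
```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Placeholder predictions and reference labels (e.g., ordinal organization scores).
y_true = np.array([0, 1, 2, 3, 2, 1])
y_pred = np.array([0, 2, 2, 3, 1, 1])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = np.sqrt(mse)                                           # Rmse
print(qwk, mse, rmse)
```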
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0100 | 2 | 2.1960 | 0.1247 | 2.1960 | 1.4819 |
| No log | 0.0199 | 4 | 1.4047 | 0.2449 | 1.4047 | 1.1852 |
| No log | 0.0299 | 6 | 1.3310 | 0.2047 | 1.3310 | 1.1537 |
| No log | 0.0398 | 8 | 1.3627 | 0.1854 | 1.3627 | 1.1674 |
| No log | 0.0498 | 10 | 1.3387 | 0.1546 | 1.3387 | 1.1570 |
| No log | 0.0597 | 12 | 1.3280 | 0.1573 | 1.3280 | 1.1524 |
| No log | 0.0697 | 14 | 1.4205 | 0.1573 | 1.4205 | 1.1919 |
| No log | 0.0796 | 16 | 1.5777 | 0.1852 | 1.5777 | 1.2561 |
| No log | 0.0896 | 18 | 1.6510 | 0.1426 | 1.6510 | 1.2849 |
| No log | 0.0995 | 20 | 1.6921 | 0.1073 | 1.6921 | 1.3008 |
| No log | 0.1095 | 22 | 1.7518 | 0.1073 | 1.7518 | 1.3236 |
| No log | 0.1194 | 24 | 1.8683 | 0.0618 | 1.8683 | 1.3669 |
| No log | 0.1294 | 26 | 1.8955 | 0.0471 | 1.8955 | 1.3768 |
| No log | 0.1393 | 28 | 1.7587 | 0.0897 | 1.7587 | 1.3262 |
| No log | 0.1493 | 30 | 1.7071 | 0.0834 | 1.7071 | 1.3066 |
| No log | 0.1592 | 32 | 1.7136 | 0.0658 | 1.7136 | 1.3091 |
| No log | 0.1692 | 34 | 1.7404 | 0.0658 | 1.7404 | 1.3193 |
| No log | 0.1791 | 36 | 1.8167 | 0.1502 | 1.8167 | 1.3479 |
| No log | 0.1891 | 38 | 1.8957 | 0.1378 | 1.8957 | 1.3768 |
| No log | 0.1990 | 40 | 2.0562 | 0.1741 | 2.0562 | 1.4339 |
| No log | 0.2090 | 42 | 2.1097 | 0.2093 | 2.1097 | 1.4525 |
| No log | 0.2189 | 44 | 2.1607 | 0.2301 | 2.1607 | 1.4699 |
| No log | 0.2289 | 46 | 2.1229 | 0.2348 | 2.1229 | 1.4570 |
| No log | 0.2388 | 48 | 2.0661 | 0.2445 | 2.0661 | 1.4374 |
| No log | 0.2488 | 50 | 2.3392 | 0.1587 | 2.3392 | 1.5294 |
| No log | 0.2587 | 52 | 2.4356 | 0.1338 | 2.4356 | 1.5606 |
| No log | 0.2687 | 54 | 2.4304 | 0.1306 | 2.4304 | 1.5590 |
| No log | 0.2786 | 56 | 2.1688 | 0.2301 | 2.1688 | 1.4727 |
| No log | 0.2886 | 58 | 2.1017 | 0.2372 | 2.1017 | 1.4497 |
| No log | 0.2985 | 60 | 2.2567 | 0.1834 | 2.2567 | 1.5022 |
| No log | 0.3085 | 62 | 2.3310 | 0.1891 | 2.3310 | 1.5268 |
| No log | 0.3184 | 64 | 2.2447 | 0.2094 | 2.2447 | 1.4982 |
| No log | 0.3284 | 66 | 2.5172 | 0.1126 | 2.5172 | 1.5866 |
| No log | 0.3383 | 68 | 2.6999 | 0.0381 | 2.6999 | 1.6431 |
| No log | 0.3483 | 70 | 2.5687 | -0.0042 | 2.5687 | 1.6027 |
| No log | 0.3582 | 72 | 2.2261 | 0.0945 | 2.2261 | 1.4920 |
| No log | 0.3682 | 74 | 1.9774 | 0.1439 | 1.9774 | 1.4062 |
| No log | 0.3781 | 76 | 1.9500 | 0.1672 | 1.9500 | 1.3964 |
| No log | 0.3881 | 78 | 2.1762 | 0.0981 | 2.1762 | 1.4752 |
| No log | 0.3980 | 80 | 2.4534 | 0.0628 | 2.4534 | 1.5663 |
| No log | 0.4080 | 82 | 2.6691 | -0.0164 | 2.6691 | 1.6337 |
| No log | 0.4179 | 84 | 2.7528 | 0.0223 | 2.7528 | 1.6592 |
| No log | 0.4279 | 86 | 2.5074 | 0.0938 | 2.5074 | 1.5835 |
| No log | 0.4378 | 88 | 2.3301 | 0.1805 | 2.3301 | 1.5265 |
| No log | 0.4478 | 90 | 2.3797 | 0.1609 | 2.3797 | 1.5426 |
| No log | 0.4577 | 92 | 2.3412 | 0.1609 | 2.3412 | 1.5301 |
| No log | 0.4677 | 94 | 2.1288 | 0.2513 | 2.1288 | 1.4590 |
| No log | 0.4776 | 96 | 2.2450 | 0.1721 | 2.2450 | 1.4983 |
| No log | 0.4876 | 98 | 2.4321 | 0.1325 | 2.4321 | 1.5595 |
| No log | 0.4975 | 100 | 2.5538 | 0.1044 | 2.5538 | 1.5981 |
| No log | 0.5075 | 102 | 2.6949 | 0.1009 | 2.6949 | 1.6416 |
| No log | 0.5174 | 104 | 2.9818 | 0.1402 | 2.9818 | 1.7268 |
| No log | 0.5274 | 106 | 3.4105 | 0.0573 | 3.4105 | 1.8467 |
| No log | 0.5373 | 108 | 3.5253 | 0.0363 | 3.5253 | 1.8776 |
| No log | 0.5473 | 110 | 3.2640 | 0.0911 | 3.2640 | 1.8066 |
| No log | 0.5572 | 112 | 2.9340 | 0.0881 | 2.9340 | 1.7129 |
| No log | 0.5672 | 114 | 2.9790 | 0.0635 | 2.9790 | 1.7260 |
| No log | 0.5771 | 116 | 3.3609 | 0.0787 | 3.3609 | 1.8333 |
| No log | 0.5871 | 118 | 3.6887 | 0.0185 | 3.6887 | 1.9206 |
| No log | 0.5970 | 120 | 3.4206 | -0.1708 | 3.4206 | 1.8495 |
| No log | 0.6070 | 122 | 3.5436 | -0.0760 | 3.5436 | 1.8824 |
| No log | 0.6169 | 124 | 3.5918 | -0.0048 | 3.5918 | 1.8952 |
| No log | 0.6269 | 126 | 3.2834 | 0.0655 | 3.2834 | 1.8120 |
| No log | 0.6368 | 128 | 3.1803 | 0.1215 | 3.1803 | 1.7833 |
| No log | 0.6468 | 130 | 3.4035 | 0.0979 | 3.4035 | 1.8449 |
| No log | 0.6567 | 132 | 3.5570 | 0.0488 | 3.5570 | 1.8860 |
| No log | 0.6667 | 134 | 3.6432 | 0.0581 | 3.6432 | 1.9087 |
| No log | 0.6766 | 136 | 3.5914 | 0.0608 | 3.5914 | 1.8951 |
| No log | 0.6866 | 138 | 3.4416 | 0.0608 | 3.4416 | 1.8551 |
| No log | 0.6965 | 140 | 3.0425 | 0.0648 | 3.0425 | 1.7443 |
| No log | 0.7065 | 142 | 2.7687 | 0.0826 | 2.7687 | 1.6639 |
| No log | 0.7164 | 144 | 2.7177 | 0.0561 | 2.7177 | 1.6486 |
| No log | 0.7264 | 146 | 2.8646 | 0.0502 | 2.8646 | 1.6925 |
| No log | 0.7363 | 148 | 2.9596 | -0.0087 | 2.9596 | 1.7203 |
| No log | 0.7463 | 150 | 3.0003 | -0.0211 | 3.0003 | 1.7321 |
| No log | 0.7562 | 152 | 3.0617 | 0.0175 | 3.0617 | 1.7498 |
| No log | 0.7662 | 154 | 3.1536 | 0.0175 | 3.1536 | 1.7758 |
| No log | 0.7761 | 156 | 3.1515 | 0.0302 | 3.1515 | 1.7752 |
| No log | 0.7861 | 158 | 3.1808 | 0.0382 | 3.1808 | 1.7835 |
| No log | 0.7960 | 160 | 3.2430 | 0.0011 | 3.2430 | 1.8008 |
| No log | 0.8060 | 162 | 3.1495 | 0.0135 | 3.1495 | 1.7747 |
| No log | 0.8159 | 164 | 3.0067 | 0.0386 | 3.0067 | 1.7340 |
| No log | 0.8259 | 166 | 2.9023 | 0.0899 | 2.9023 | 1.7036 |
| No log | 0.8358 | 168 | 2.7747 | 0.1261 | 2.7747 | 1.6657 |
| No log | 0.8458 | 170 | 2.7913 | 0.1448 | 2.7913 | 1.6707 |
| No log | 0.8557 | 172 | 2.9454 | 0.0936 | 2.9454 | 1.7162 |
| No log | 0.8657 | 174 | 3.0230 | 0.1311 | 3.0230 | 1.7387 |
| No log | 0.8756 | 176 | 3.0119 | 0.1370 | 3.0119 | 1.7355 |
| No log | 0.8856 | 178 | 2.9140 | 0.1370 | 2.9140 | 1.7070 |
| No log | 0.8955 | 180 | 2.7306 | 0.1370 | 2.7306 | 1.6524 |
| No log | 0.9055 | 182 | 2.7588 | 0.1370 | 2.7588 | 1.6610 |
| No log | 0.9154 | 184 | 2.9501 | 0.1294 | 2.9501 | 1.7176 |
| No log | 0.9254 | 186 | 3.1374 | 0.1046 | 3.1374 | 1.7713 |
| No log | 0.9353 | 188 | 3.0951 | 0.0976 | 3.0951 | 1.7593 |
| No log | 0.9453 | 190 | 2.8847 | 0.1294 | 2.8847 | 1.6985 |
| No log | 0.9552 | 192 | 2.7892 | 0.1370 | 2.7892 | 1.6701 |
| No log | 0.9652 | 194 | 2.9187 | 0.1311 | 2.9187 | 1.7084 |
| No log | 0.9751 | 196 | 3.1405 | 0.2214 | 3.1405 | 1.7721 |
| No log | 0.9851 | 198 | 3.2500 | 0.2299 | 3.2500 | 1.8028 |
| No log | 0.9950 | 200 | 3.2694 | 0.2230 | 3.2694 | 1.8082 |
| No log | 1.0050 | 202 | 3.2003 | 0.1388 | 3.2003 | 1.7889 |
| No log | 1.0149 | 204 | 2.9709 | 0.0798 | 2.9709 | 1.7236 |
| No log | 1.0249 | 206 | 2.7235 | 0.1386 | 2.7235 | 1.6503 |
| No log | 1.0348 | 208 | 2.4658 | 0.1587 | 2.4658 | 1.5703 |
| No log | 1.0448 | 210 | 2.3508 | 0.1662 | 2.3508 | 1.5332 |
| No log | 1.0547 | 212 | 2.3910 | 0.1401 | 2.3910 | 1.5463 |
| No log | 1.0647 | 214 | 2.5722 | 0.1447 | 2.5722 | 1.6038 |
| No log | 1.0746 | 216 | 2.8707 | 0.1401 | 2.8707 | 1.6943 |
| No log | 1.0846 | 218 | 3.0846 | 0.1735 | 3.0846 | 1.7563 |
| No log | 1.0945 | 220 | 3.2126 | 0.1602 | 3.2126 | 1.7924 |
| No log | 1.1045 | 222 | 3.2809 | 0.1704 | 3.2809 | 1.8113 |
| No log | 1.1144 | 224 | 3.2677 | 0.1602 | 3.2677 | 1.8077 |
| No log | 1.1244 | 226 | 3.4179 | 0.1545 | 3.4179 | 1.8488 |
| No log | 1.1343 | 228 | 3.3457 | 0.1545 | 3.3457 | 1.8291 |
| No log | 1.1443 | 230 | 3.1163 | 0.1946 | 3.1163 | 1.7653 |
| No log | 1.1542 | 232 | 2.8406 | 0.1741 | 2.8406 | 1.6854 |
| No log | 1.1642 | 234 | 2.8175 | 0.1627 | 2.8175 | 1.6785 |
| No log | 1.1741 | 236 | 2.9605 | 0.1998 | 2.9605 | 1.7206 |
| No log | 1.1841 | 238 | 3.0175 | 0.2099 | 3.0175 | 1.7371 |
| No log | 1.1940 | 240 | 3.1232 | 0.1854 | 3.1232 | 1.7673 |
| No log | 1.2040 | 242 | 3.0914 | 0.1643 | 3.0914 | 1.7582 |
| No log | 1.2139 | 244 | 3.0845 | 0.1669 | 3.0845 | 1.7563 |
| No log | 1.2239 | 246 | 2.8350 | 0.1190 | 2.8350 | 1.6838 |
| No log | 1.2338 | 248 | 2.7043 | 0.1264 | 2.7043 | 1.6445 |
| No log | 1.2438 | 250 | 2.7554 | 0.1222 | 2.7554 | 1.6599 |
| No log | 1.2537 | 252 | 2.7850 | 0.1222 | 2.7850 | 1.6688 |
| No log | 1.2637 | 254 | 2.9631 | 0.1149 | 2.9631 | 1.7214 |
| No log | 1.2736 | 256 | 2.8749 | 0.1149 | 2.8749 | 1.6956 |
| No log | 1.2836 | 258 | 2.6861 | 0.1275 | 2.6861 | 1.6389 |
| No log | 1.2935 | 260 | 2.6920 | 0.1275 | 2.6920 | 1.6407 |
| No log | 1.3035 | 262 | 2.8031 | 0.1275 | 2.8031 | 1.6743 |
| No log | 1.3134 | 264 | 2.8250 | 0.1308 | 2.8250 | 1.6808 |
| No log | 1.3234 | 266 | 2.9085 | 0.1264 | 2.9085 | 1.7054 |
| No log | 1.3333 | 268 | 3.1169 | 0.1149 | 3.1169 | 1.7655 |
| No log | 1.3433 | 270 | 3.3195 | 0.0506 | 3.3195 | 1.8220 |
| No log | 1.3532 | 272 | 3.4382 | 0.1077 | 3.4382 | 1.8542 |
| No log | 1.3632 | 274 | 3.3564 | 0.1389 | 3.3564 | 1.8320 |
| No log | 1.3731 | 276 | 3.2683 | 0.1549 | 3.2683 | 1.8079 |
| No log | 1.3831 | 278 | 3.2822 | 0.1549 | 3.2822 | 1.8117 |
| No log | 1.3930 | 280 | 3.2866 | 0.1812 | 3.2866 | 1.8129 |
| No log | 1.4030 | 282 | 3.3286 | 0.1812 | 3.3286 | 1.8245 |
| No log | 1.4129 | 284 | 3.2477 | 0.1388 | 3.2477 | 1.8021 |
| No log | 1.4229 | 286 | 3.3979 | 0.1092 | 3.3979 | 1.8433 |
| No log | 1.4328 | 288 | 3.4545 | 0.1028 | 3.4545 | 1.8586 |
| No log | 1.4428 | 290 | 3.3875 | 0.1007 | 3.3875 | 1.8405 |
| No log | 1.4527 | 292 | 3.2797 | 0.1429 | 3.2797 | 1.8110 |
| No log | 1.4627 | 294 | 3.2223 | 0.1795 | 3.2223 | 1.7951 |
| No log | 1.4726 | 296 | 3.2881 | 0.1913 | 3.2881 | 1.8133 |
| No log | 1.4826 | 298 | 3.4482 | 0.0943 | 3.4482 | 1.8569 |
| No log | 1.4925 | 300 | 3.5548 | 0.0943 | 3.5548 | 1.8854 |
| No log | 1.5025 | 302 | 3.5646 | 0.0943 | 3.5646 | 1.8880 |
| No log | 1.5124 | 304 | 3.4521 | 0.1058 | 3.4521 | 1.8580 |
| No log | 1.5224 | 306 | 3.2040 | 0.1099 | 3.2040 | 1.7900 |
| No log | 1.5323 | 308 | 3.0972 | 0.1271 | 3.0972 | 1.7599 |
| No log | 1.5423 | 310 | 3.0389 | 0.1149 | 3.0389 | 1.7432 |
| No log | 1.5522 | 312 | 3.0036 | 0.1149 | 3.0036 | 1.7331 |
| No log | 1.5622 | 314 | 3.0326 | 0.1149 | 3.0326 | 1.7414 |
| No log | 1.5721 | 316 | 3.1834 | 0.1271 | 3.1834 | 1.7842 |
| No log | 1.5821 | 318 | 3.2513 | 0.1443 | 3.2513 | 1.8031 |
| No log | 1.5920 | 320 | 3.3827 | 0.2028 | 3.3827 | 1.8392 |
| No log | 1.6020 | 322 | 3.6526 | 0.2057 | 3.6526 | 1.9112 |
| No log | 1.6119 | 324 | 3.7550 | 0.1910 | 3.7550 | 1.9378 |
| No log | 1.6219 | 326 | 3.6337 | 0.1830 | 3.6337 | 1.9062 |
| No log | 1.6318 | 328 | 3.6341 | 0.2101 | 3.6341 | 1.9063 |
| No log | 1.6418 | 330 | 3.4991 | 0.2259 | 3.4991 | 1.8706 |
| No log | 1.6517 | 332 | 3.2242 | 0.1789 | 3.2242 | 1.7956 |
| No log | 1.6617 | 334 | 3.1098 | 0.1343 | 3.1098 | 1.7635 |
| No log | 1.6716 | 336 | 2.9397 | 0.1222 | 2.9397 | 1.7146 |
| No log | 1.6816 | 338 | 2.7726 | 0.1295 | 2.7726 | 1.6651 |
| No log | 1.6915 | 340 | 2.7160 | 0.1477 | 2.7160 | 1.6480 |
| No log | 1.7015 | 342 | 2.8248 | 0.1222 | 2.8248 | 1.6807 |
| No log | 1.7114 | 344 | 2.9734 | 0.1222 | 2.9734 | 1.7243 |
| No log | 1.7214 | 346 | 3.1193 | 0.1149 | 3.1193 | 1.7662 |
| No log | 1.7313 | 348 | 3.0751 | 0.1222 | 3.0751 | 1.7536 |
| No log | 1.7413 | 350 | 3.0150 | 0.1222 | 3.0150 | 1.7364 |
| No log | 1.7512 | 352 | 2.9001 | 0.1222 | 2.9001 | 1.7030 |
| No log | 1.7612 | 354 | 2.7503 | 0.1199 | 2.7503 | 1.6584 |
| No log | 1.7711 | 356 | 2.8396 | 0.1199 | 2.8396 | 1.6851 |
| No log | 1.7811 | 358 | 3.1906 | 0.1387 | 3.1906 | 1.7862 |
| No log | 1.7910 | 360 | 3.3864 | 0.1529 | 3.3864 | 1.8402 |
| No log | 1.8010 | 362 | 3.3029 | 0.0986 | 3.3029 | 1.8174 |
| No log | 1.8109 | 364 | 3.0837 | 0.0806 | 3.0837 | 1.7561 |
| No log | 1.8209 | 366 | 2.9195 | 0.1401 | 2.9195 | 1.7086 |
| No log | 1.8308 | 368 | 2.9219 | 0.1326 | 2.9219 | 1.7093 |
| No log | 1.8408 | 370 | 2.9919 | 0.1401 | 2.9919 | 1.7297 |
| No log | 1.8507 | 372 | 3.1030 | 0.1149 | 3.1030 | 1.7615 |
| No log | 1.8607 | 374 | 3.2338 | 0.1099 | 3.2338 | 1.7983 |
| No log | 1.8706 | 376 | 3.4660 | 0.1464 | 3.4660 | 1.8617 |
| No log | 1.8806 | 378 | 3.6955 | 0.1618 | 3.6955 | 1.9224 |
| No log | 1.8905 | 380 | 3.8042 | 0.1797 | 3.8042 | 1.9504 |
| No log | 1.9005 | 382 | 3.6292 | 0.1921 | 3.6292 | 1.9051 |
| No log | 1.9104 | 384 | 3.4924 | 0.1886 | 3.4924 | 1.8688 |
| No log | 1.9204 | 386 | 3.2568 | 0.1922 | 3.2568 | 1.8047 |
| No log | 1.9303 | 388 | 3.1755 | 0.1046 | 3.1755 | 1.7820 |
| No log | 1.9403 | 390 | 3.1875 | 0.0976 | 3.1875 | 1.7854 |
| No log | 1.9502 | 392 | 3.2339 | 0.1099 | 3.2339 | 1.7983 |
| No log | 1.9602 | 394 | 3.3433 | 0.1215 | 3.3433 | 1.8285 |
| No log | 1.9701 | 396 | 3.3342 | 0.1215 | 3.3342 | 1.8260 |
| No log | 1.9801 | 398 | 3.2893 | 0.1317 | 3.2893 | 1.8136 |
| No log | 1.9900 | 400 | 3.0759 | 0.1222 | 3.0759 | 1.7538 |
| No log | 2.0 | 402 | 2.9565 | 0.1222 | 2.9565 | 1.7194 |
| No log | 2.0100 | 404 | 3.1284 | 0.0976 | 3.1284 | 1.7687 |
| No log | 2.0199 | 406 | 3.3915 | 0.0639 | 3.3915 | 1.8416 |
| No log | 2.0299 | 408 | 3.4469 | 0.0475 | 3.4469 | 1.8566 |
| No log | 2.0398 | 410 | 3.4048 | 0.0639 | 3.4048 | 1.8452 |
| No log | 2.0498 | 412 | 3.2228 | 0.0639 | 3.2228 | 1.7952 |
| No log | 2.0597 | 414 | 3.0660 | 0.0704 | 3.0660 | 1.7510 |
| No log | 2.0697 | 416 | 3.0670 | 0.0668 | 3.0670 | 1.7513 |
| No log | 2.0796 | 418 | 3.2222 | 0.0639 | 3.2222 | 1.7951 |
| No log | 2.0896 | 420 | 3.4735 | 0.0766 | 3.4735 | 1.8637 |
| No log | 2.0995 | 422 | 3.6528 | 0.1318 | 3.6528 | 1.9112 |
| No log | 2.1095 | 424 | 3.7452 | 0.1357 | 3.7452 | 1.9353 |
| No log | 2.1194 | 426 | 3.5617 | 0.1307 | 3.5617 | 1.8872 |
| No log | 2.1294 | 428 | 3.3742 | 0.1111 | 3.3742 | 1.8369 |
| No log | 2.1393 | 430 | 3.4286 | 0.1007 | 3.4286 | 1.8516 |
| No log | 2.1493 | 432 | 3.5617 | 0.1168 | 3.5617 | 1.8873 |
| No log | 2.1592 | 434 | 3.5785 | 0.0943 | 3.5785 | 1.8917 |
| No log | 2.1692 | 436 | 3.5797 | 0.0943 | 3.5797 | 1.8920 |
| No log | 2.1791 | 438 | 3.5680 | 0.0943 | 3.5680 | 1.8889 |
| No log | 2.1891 | 440 | 3.5315 | 0.0943 | 3.5315 | 1.8792 |
| No log | 2.1990 | 442 | 3.3609 | 0.0639 | 3.3609 | 1.8333 |
| No log | 2.2090 | 444 | 3.3091 | 0.0639 | 3.3091 | 1.8191 |
| No log | 2.2189 | 446 | 3.2896 | 0.0639 | 3.2896 | 1.8137 |
| No log | 2.2289 | 448 | 3.2799 | 0.0639 | 3.2799 | 1.8110 |
| No log | 2.2388 | 450 | 3.3672 | 0.0475 | 3.3672 | 1.8350 |
| No log | 2.2488 | 452 | 3.5249 | 0.0604 | 3.5249 | 1.8775 |
| No log | 2.2587 | 454 | 3.7002 | 0.0825 | 3.7002 | 1.9236 |
| No log | 2.2687 | 456 | 3.8291 | 0.1058 | 3.8291 | 1.9568 |
| No log | 2.2786 | 458 | 3.7675 | 0.1058 | 3.7675 | 1.9410 |
| No log | 2.2886 | 460 | 3.7187 | 0.0943 | 3.7187 | 1.9284 |
| No log | 2.2985 | 462 | 3.5901 | 0.0825 | 3.5901 | 1.8948 |
| No log | 2.3085 | 464 | 3.3934 | 0.0604 | 3.3934 | 1.8421 |
| No log | 2.3184 | 466 | 3.2359 | 0.0475 | 3.2359 | 1.7988 |
| No log | 2.3284 | 468 | 3.2064 | 0.0475 | 3.2064 | 1.7906 |
| No log | 2.3383 | 470 | 3.4072 | 0.0475 | 3.4072 | 1.8459 |
| No log | 2.3483 | 472 | 3.5189 | 0.0604 | 3.5189 | 1.8759 |
| No log | 2.3582 | 474 | 3.5186 | 0.0604 | 3.5186 | 1.8758 |
| No log | 2.3682 | 476 | 3.3975 | 0.1052 | 3.3975 | 1.8432 |
| No log | 2.3781 | 478 | 3.2266 | 0.0956 | 3.2266 | 1.7963 |
| No log | 2.3881 | 480 | 3.1666 | 0.0923 | 3.1666 | 1.7795 |
| No log | 2.3980 | 482 | 3.2602 | 0.0989 | 3.2602 | 1.8056 |
| No log | 2.4080 | 484 | 3.4651 | 0.1389 | 3.4651 | 1.8615 |
| No log | 2.4179 | 486 | 3.7047 | 0.1507 | 3.7047 | 1.9248 |
| No log | 2.4279 | 488 | 3.8509 | 0.1370 | 3.8509 | 1.9624 |
| No log | 2.4378 | 490 | 3.7275 | 0.1180 | 3.7275 | 1.9307 |
| No log | 2.4478 | 492 | 3.5512 | 0.0729 | 3.5512 | 1.8845 |
| No log | 2.4577 | 494 | 3.4163 | 0.0766 | 3.4163 | 1.8483 |
| No log | 2.4677 | 496 | 3.2984 | 0.0639 | 3.2984 | 1.8162 |
| No log | 2.4776 | 498 | 3.0868 | 0.0806 | 3.0868 | 1.7569 |
| 0.2753 | 2.4876 | 500 | 3.1102 | 0.0806 | 3.1102 | 1.7636 |
| 0.2753 | 2.4975 | 502 | 3.2582 | 0.0639 | 3.2582 | 1.8051 |
| 0.2753 | 2.5075 | 504 | 3.4059 | 0.0639 | 3.4059 | 1.8455 |
| 0.2753 | 2.5174 | 506 | 3.4965 | 0.0639 | 3.4965 | 1.8699 |
| 0.2753 | 2.5274 | 508 | 3.5809 | 0.0475 | 3.5809 | 1.8923 |
| 0.2753 | 2.5373 | 510 | 3.5075 | 0.0639 | 3.5075 | 1.8728 |
| 0.2753 | 2.5473 | 512 | 3.3383 | 0.0806 | 3.3383 | 1.8271 |
| 0.2753 | 2.5572 | 514 | 3.2206 | 0.0806 | 3.2206 | 1.7946 |
| 0.2753 | 2.5672 | 516 | 3.2074 | 0.0806 | 3.2074 | 1.7909 |
| 0.2753 | 2.5771 | 518 | 3.2756 | 0.0806 | 3.2756 | 1.8099 |
| 0.2753 | 2.5871 | 520 | 3.3911 | 0.0475 | 3.3911 | 1.8415 |
| 0.2753 | 2.5970 | 522 | 3.5985 | 0.0604 | 3.5985 | 1.8970 |
| 0.2753 | 2.6070 | 524 | 3.5889 | 0.0475 | 3.5889 | 1.8944 |
| 0.2753 | 2.6169 | 526 | 3.5131 | 0.0475 | 3.5131 | 1.8743 |
| 0.2753 | 2.6269 | 528 | 3.3958 | 0.0475 | 3.3958 | 1.8428 |
| 0.2753 | 2.6368 | 530 | 3.4500 | 0.0475 | 3.4500 | 1.8574 |
| 0.2753 | 2.6468 | 532 | 3.5744 | 0.0849 | 3.5744 | 1.8906 |
| 0.2753 | 2.6567 | 534 | 3.7348 | 0.1318 | 3.7348 | 1.9326 |
| 0.2753 | 2.6667 | 536 | 3.7207 | 0.0917 | 3.7207 | 1.9289 |
| 0.2753 | 2.6766 | 538 | 3.6858 | 0.0729 | 3.6858 | 1.9198 |
| 0.2753 | 2.6866 | 540 | 3.5377 | 0.0475 | 3.5377 | 1.8809 |
| 0.2753 | 2.6965 | 542 | 3.4520 | 0.0475 | 3.4520 | 1.8580 |
| 0.2753 | 2.7065 | 544 | 3.3554 | 0.0806 | 3.3554 | 1.8318 |
| 0.2753 | 2.7164 | 546 | 3.1911 | 0.0771 | 3.1911 | 1.7864 |
| 0.2753 | 2.7264 | 548 | 3.0292 | 0.1190 | 3.0292 | 1.7405 |
| 0.2753 | 2.7363 | 550 | 3.0258 | 0.1190 | 3.0258 | 1.7395 |
| 0.2753 | 2.7463 | 552 | 3.2185 | 0.0931 | 3.2185 | 1.7940 |
| 0.2753 | 2.7562 | 554 | 3.5608 | 0.1323 | 3.5608 | 1.8870 |
| 0.2753 | 2.7662 | 556 | 3.6213 | 0.1377 | 3.6213 | 1.9030 |
| 0.2753 | 2.7761 | 558 | 3.5042 | 0.0889 | 3.5042 | 1.8720 |
| 0.2753 | 2.7861 | 560 | 3.5021 | 0.0729 | 3.5021 | 1.8714 |
| 0.2753 | 2.7960 | 562 | 3.3487 | 0.0766 | 3.3487 | 1.8299 |
| 0.2753 | 2.8060 | 564 | 3.3425 | 0.0766 | 3.3425 | 1.8282 |
| 0.2753 | 2.8159 | 566 | 3.4236 | 0.1375 | 3.4236 | 1.8503 |
| 0.2753 | 2.8259 | 568 | 3.5008 | 0.1877 | 3.5008 | 1.8710 |
| 0.2753 | 2.8358 | 570 | 3.4479 | 0.1866 | 3.4479 | 1.8569 |
| 0.2753 | 2.8458 | 572 | 3.4292 | 0.2034 | 3.4292 | 1.8518 |
| 0.2753 | 2.8557 | 574 | 3.4950 | 0.2036 | 3.4950 | 1.8695 |
| 0.2753 | 2.8657 | 576 | 3.7241 | 0.1910 | 3.7241 | 1.9298 |
| 0.2753 | 2.8756 | 578 | 3.7250 | 0.1910 | 3.7250 | 1.9300 |
| 0.2753 | 2.8856 | 580 | 3.5713 | 0.1622 | 3.5713 | 1.8898 |
| 0.2753 | 2.8955 | 582 | 3.4691 | 0.1675 | 3.4691 | 1.8625 |
| 0.2753 | 2.9055 | 584 | 3.4469 | 0.1574 | 3.4469 | 1.8566 |
| 0.2753 | 2.9154 | 586 | 3.5141 | 0.1505 | 3.5141 | 1.8746 |
| 0.2753 | 2.9254 | 588 | 3.4711 | 0.1574 | 3.4711 | 1.8631 |
| 0.2753 | 2.9353 | 590 | 3.4983 | 0.1584 | 3.4983 | 1.8704 |
| 0.2753 | 2.9453 | 592 | 3.6035 | 0.1707 | 3.6035 | 1.8983 |
| 0.2753 | 2.9552 | 594 | 3.5212 | 0.1584 | 3.5212 | 1.8765 |
| 0.2753 | 2.9652 | 596 | 3.5444 | 0.1584 | 3.5444 | 1.8826 |
| 0.2753 | 2.9751 | 598 | 3.5384 | 0.1574 | 3.5384 | 1.8811 |
| 0.2753 | 2.9851 | 600 | 3.5548 | 0.1574 | 3.5548 | 1.8854 |
| 0.2753 | 2.9950 | 602 | 3.6214 | 0.1364 | 3.6214 | 1.9030 |
| 0.2753 | 3.0050 | 604 | 3.5112 | 0.1259 | 3.5112 | 1.8738 |
| 0.2753 | 3.0149 | 606 | 3.4265 | 0.1264 | 3.4265 | 1.8511 |
| 0.2753 | 3.0249 | 608 | 3.5313 | 0.1415 | 3.5313 | 1.8792 |
| 0.2753 | 3.0348 | 610 | 3.6801 | 0.1465 | 3.6801 | 1.9184 |
| 0.2753 | 3.0448 | 612 | 3.8465 | 0.1381 | 3.8465 | 1.9612 |
| 0.2753 | 3.0547 | 614 | 3.9150 | 0.1248 | 3.9150 | 1.9786 |
| 0.2753 | 3.0647 | 616 | 4.0940 | 0.1586 | 4.0940 | 2.0234 |
| 0.2753 | 3.0746 | 618 | 4.1073 | 0.1248 | 4.1073 | 2.0267 |
| 0.2753 | 3.0846 | 620 | 3.9175 | 0.0787 | 3.9175 | 1.9793 |
| 0.2753 | 3.0945 | 622 | 3.7090 | 0.0542 | 3.7090 | 1.9259 |
| 0.2753 | 3.1045 | 624 | 3.5662 | 0.0314 | 3.5662 | 1.8885 |
| 0.2753 | 3.1144 | 626 | 3.5099 | 0.0475 | 3.5099 | 1.8735 |
| 0.2753 | 3.1244 | 628 | 3.4015 | 0.0475 | 3.4015 | 1.8443 |
| 0.2753 | 3.1343 | 630 | 3.4013 | 0.0475 | 3.4013 | 1.8443 |
| 0.2753 | 3.1443 | 632 | 3.4116 | 0.0437 | 3.4116 | 1.8470 |
| 0.2753 | 3.1542 | 634 | 3.4648 | 0.0501 | 3.4648 | 1.8614 |
| 0.2753 | 3.1642 | 636 | 3.5855 | 0.0542 | 3.5855 | 1.8935 |
| 0.2753 | 3.1741 | 638 | 3.7185 | 0.0667 | 3.7185 | 1.9284 |
| 0.2753 | 3.1841 | 640 | 3.8232 | 0.1123 | 3.8232 | 1.9553 |
| 0.2753 | 3.1940 | 642 | 3.8529 | 0.1123 | 3.8529 | 1.9629 |
| 0.2753 | 3.2040 | 644 | 3.8201 | 0.0903 | 3.8201 | 1.9545 |
| 0.2753 | 3.2139 | 646 | 3.7680 | 0.0825 | 3.7680 | 1.9411 |
| 0.2753 | 3.2239 | 648 | 3.6730 | 0.0825 | 3.6730 | 1.9165 |
| 0.2753 | 3.2338 | 650 | 3.6195 | 0.0702 | 3.6195 | 1.9025 |
| 0.2753 | 3.2438 | 652 | 3.6072 | 0.0702 | 3.6072 | 1.8993 |
| 0.2753 | 3.2537 | 654 | 3.6639 | 0.0702 | 3.6639 | 1.9141 |
| 0.2753 | 3.2637 | 656 | 3.7105 | 0.0702 | 3.7105 | 1.9263 |
| 0.2753 | 3.2736 | 658 | 3.8076 | 0.0667 | 3.8076 | 1.9513 |
| 0.2753 | 3.2836 | 660 | 3.8872 | 0.1015 | 3.8872 | 1.9716 |
| 0.2753 | 3.2935 | 662 | 3.9295 | 0.1123 | 3.9295 | 1.9823 |
| 0.2753 | 3.3035 | 664 | 3.9410 | 0.1015 | 3.9410 | 1.9852 |
| 0.2753 | 3.3134 | 666 | 3.8801 | 0.0667 | 3.8801 | 1.9698 |
| 0.2753 | 3.3234 | 668 | 3.7340 | 0.0314 | 3.7340 | 1.9324 |
| 0.2753 | 3.3333 | 670 | 3.6398 | 0.0314 | 3.6398 | 1.9078 |
| 0.2753 | 3.3433 | 672 | 3.6334 | 0.0475 | 3.6334 | 1.9062 |
| 0.2753 | 3.3532 | 674 | 3.5834 | 0.0825 | 3.5834 | 1.8930 |
| 0.2753 | 3.3632 | 676 | 3.5828 | 0.0943 | 3.5828 | 1.8928 |
| 0.2753 | 3.3731 | 678 | 3.5539 | 0.0943 | 3.5539 | 1.8852 |
| 0.2753 | 3.3831 | 680 | 3.5011 | 0.0943 | 3.5011 | 1.8711 |
| 0.2753 | 3.3930 | 682 | 3.5363 | 0.0943 | 3.5363 | 1.8805 |
| 0.2753 | 3.4030 | 684 | 3.6130 | 0.0943 | 3.6130 | 1.9008 |
| 0.2753 | 3.4129 | 686 | 3.5481 | 0.0943 | 3.5481 | 1.8836 |
| 0.2753 | 3.4229 | 688 | 3.5020 | 0.0943 | 3.5020 | 1.8714 |
| 0.2753 | 3.4328 | 690 | 3.3804 | 0.1103 | 3.3804 | 1.8386 |
| 0.2753 | 3.4428 | 692 | 3.3529 | 0.1375 | 3.3529 | 1.8311 |
| 0.2753 | 3.4527 | 694 | 3.2574 | 0.1375 | 3.2574 | 1.8048 |
| 0.2753 | 3.4627 | 696 | 3.1596 | 0.1375 | 3.1596 | 1.7775 |
| 0.2753 | 3.4726 | 698 | 3.1322 | 0.1375 | 3.1322 | 1.7698 |
| 0.2753 | 3.4826 | 700 | 3.2854 | 0.1375 | 3.2854 | 1.8126 |
| 0.2753 | 3.4925 | 702 | 3.4445 | 0.1215 | 3.4445 | 1.8559 |
| 0.2753 | 3.5025 | 704 | 3.4620 | 0.1103 | 3.4620 | 1.8607 |
| 0.2753 | 3.5124 | 706 | 3.4674 | 0.1103 | 3.4674 | 1.8621 |
| 0.2753 | 3.5224 | 708 | 3.4187 | 0.0865 | 3.4187 | 1.8490 |
| 0.2753 | 3.5323 | 710 | 3.3580 | 0.0865 | 3.3580 | 1.8325 |
| 0.2753 | 3.5423 | 712 | 3.2059 | 0.0806 | 3.2059 | 1.7905 |
| 0.2753 | 3.5522 | 714 | 3.1134 | 0.0942 | 3.1134 | 1.7645 |
| 0.2753 | 3.5622 | 716 | 3.1328 | 0.0806 | 3.1328 | 1.7700 |
| 0.2753 | 3.5721 | 718 | 3.2355 | 0.0639 | 3.2355 | 1.7987 |
| 0.2753 | 3.5821 | 720 | 3.3721 | 0.0865 | 3.3721 | 1.8363 |
| 0.2753 | 3.5920 | 722 | 3.5582 | 0.0865 | 3.5582 | 1.8863 |
| 0.2753 | 3.6020 | 724 | 3.7361 | 0.1058 | 3.7361 | 1.9329 |
| 0.2753 | 3.6119 | 726 | 3.7744 | 0.1329 | 3.7744 | 1.9428 |
| 0.2753 | 3.6219 | 728 | 3.6798 | 0.1722 | 3.6798 | 1.9183 |
| 0.2753 | 3.6318 | 730 | 3.5309 | 0.1722 | 3.5309 | 1.8791 |
| 0.2753 | 3.6418 | 732 | 3.3759 | 0.1335 | 3.3759 | 1.8374 |
| 0.2753 | 3.6517 | 734 | 3.2743 | 0.0979 | 3.2743 | 1.8095 |
| 0.2753 | 3.6617 | 736 | 3.2057 | 0.0490 | 3.2057 | 1.7904 |
| 0.2753 | 3.6716 | 738 | 3.2580 | 0.0668 | 3.2580 | 1.8050 |
| 0.2753 | 3.6816 | 740 | 3.4201 | 0.0825 | 3.4201 | 1.8494 |
| 0.2753 | 3.6915 | 742 | 3.5938 | 0.0825 | 3.5938 | 1.8957 |
| 0.2753 | 3.7015 | 744 | 3.6947 | 0.0903 | 3.6947 | 1.9222 |
| 0.2753 | 3.7114 | 746 | 3.7346 | 0.1015 | 3.7346 | 1.9325 |
| 0.2753 | 3.7214 | 748 | 3.7015 | 0.1015 | 3.7015 | 1.9239 |
| 0.2753 | 3.7313 | 750 | 3.6106 | 0.1015 | 3.6106 | 1.9002 |
| 0.2753 | 3.7413 | 752 | 3.4268 | 0.1103 | 3.4268 | 1.8512 |
| 0.2753 | 3.7512 | 754 | 3.2776 | 0.1119 | 3.2776 | 1.8104 |
| 0.2753 | 3.7612 | 756 | 3.2213 | 0.1013 | 3.2213 | 1.7948 |
| 0.2753 | 3.7711 | 758 | 3.2489 | 0.0999 | 3.2489 | 1.8025 |
| 0.2753 | 3.7811 | 760 | 3.3077 | 0.0766 | 3.3077 | 1.8187 |
| 0.2753 | 3.7910 | 762 | 3.4107 | 0.1058 | 3.4107 | 1.8468 |
| 0.2753 | 3.8010 | 764 | 3.5854 | 0.1123 | 3.5854 | 1.8935 |
| 0.2753 | 3.8109 | 766 | 3.6222 | 0.0903 | 3.6222 | 1.9032 |
| 0.2753 | 3.8209 | 768 | 3.6739 | 0.0787 | 3.6739 | 1.9167 |
| 0.2753 | 3.8308 | 770 | 3.6356 | 0.0667 | 3.6356 | 1.9067 |
| 0.2753 | 3.8408 | 772 | 3.5077 | 0.0314 | 3.5077 | 1.8729 |
| 0.2753 | 3.8507 | 774 | 3.3506 | 0.0475 | 3.3506 | 1.8305 |
| 0.2753 | 3.8607 | 776 | 3.2342 | 0.0639 | 3.2342 | 1.7984 |
| 0.2753 | 3.8706 | 778 | 3.1749 | 0.0639 | 3.1749 | 1.7818 |
| 0.2753 | 3.8806 | 780 | 3.2212 | 0.0639 | 3.2212 | 1.7948 |
| 0.2753 | 3.8905 | 782 | 3.3624 | 0.1215 | 3.3624 | 1.8337 |
| 0.2753 | 3.9005 | 784 | 3.5349 | 0.1627 | 3.5349 | 1.8801 |
| 0.2753 | 3.9104 | 786 | 3.7142 | 0.1763 | 3.7142 | 1.9272 |
| 0.2753 | 3.9204 | 788 | 3.7673 | 0.1484 | 3.7673 | 1.9410 |
| 0.2753 | 3.9303 | 790 | 3.7488 | 0.1392 | 3.7488 | 1.9362 |
| 0.2753 | 3.9403 | 792 | 3.6380 | 0.1512 | 3.6380 | 1.9074 |
| 0.2753 | 3.9502 | 794 | 3.5189 | 0.1315 | 3.5189 | 1.8759 |
| 0.2753 | 3.9602 | 796 | 3.3709 | 0.1493 | 3.3709 | 1.8360 |
| 0.2753 | 3.9701 | 798 | 3.3371 | 0.1684 | 3.3371 | 1.8268 |
| 0.2753 | 3.9801 | 800 | 3.3025 | 0.1684 | 3.3025 | 1.8173 |
| 0.2753 | 3.9900 | 802 | 3.2691 | 0.1660 | 3.2691 | 1.8081 |
| 0.2753 | 4.0 | 804 | 3.2924 | 0.1467 | 3.2924 | 1.8145 |
| 0.2753 | 4.0100 | 806 | 3.3302 | 0.1521 | 3.3302 | 1.8249 |
| 0.2753 | 4.0199 | 808 | 3.3963 | 0.1584 | 3.3963 | 1.8429 |
| 0.2753 | 4.0299 | 810 | 3.4944 | 0.1761 | 3.4944 | 1.8693 |
| 0.2753 | 4.0398 | 812 | 3.6567 | 0.1622 | 3.6567 | 1.9122 |
| 0.2753 | 4.0498 | 814 | 3.7405 | 0.1184 | 3.7405 | 1.9340 |
| 0.2753 | 4.0597 | 816 | 3.7613 | 0.1063 | 3.7613 | 1.9394 |
| 0.2753 | 4.0697 | 818 | 3.7210 | 0.0842 | 3.7210 | 1.9290 |
| 0.2753 | 4.0796 | 820 | 3.6214 | 0.0511 | 3.6214 | 1.9030 |
| 0.2753 | 4.0896 | 822 | 3.4958 | 0.0542 | 3.4958 | 1.8697 |
| 0.2753 | 4.0995 | 824 | 3.3781 | 0.0702 | 3.3781 | 1.8380 |
| 0.2753 | 4.1095 | 826 | 3.3281 | 0.0865 | 3.3281 | 1.8243 |
| 0.2753 | 4.1194 | 828 | 3.3654 | 0.0931 | 3.3654 | 1.8345 |
| 0.2753 | 4.1294 | 830 | 3.4087 | 0.1052 | 3.4087 | 1.8463 |
| 0.2753 | 4.1393 | 832 | 3.4217 | 0.1743 | 3.4217 | 1.8498 |
| 0.2753 | 4.1493 | 834 | 3.3489 | 0.2005 | 3.3489 | 1.8300 |
| 0.2753 | 4.1592 | 836 | 3.3853 | 0.2181 | 3.3853 | 1.8399 |
| 0.2753 | 4.1692 | 838 | 3.4582 | 0.1860 | 3.4582 | 1.8596 |
| 0.2753 | 4.1791 | 840 | 3.4828 | 0.1860 | 3.4828 | 1.8662 |
| 0.2753 | 4.1891 | 842 | 3.4350 | 0.1860 | 3.4350 | 1.8534 |
| 0.2753 | 4.1990 | 844 | 3.4512 | 0.1875 | 3.4512 | 1.8577 |
| 0.2753 | 4.2090 | 846 | 3.4341 | 0.2114 | 3.4341 | 1.8531 |
| 0.2753 | 4.2189 | 848 | 3.3095 | 0.1913 | 3.3095 | 1.8192 |
| 0.2753 | 4.2289 | 850 | 3.1644 | 0.1108 | 3.1644 | 1.7789 |
| 0.2753 | 4.2388 | 852 | 3.1698 | 0.0989 | 3.1698 | 1.7804 |
| 0.2753 | 4.2488 | 854 | 3.2080 | 0.1021 | 3.2080 | 1.7911 |
| 0.2753 | 4.2587 | 856 | 3.2542 | 0.0889 | 3.2542 | 1.8039 |
| 0.2753 | 4.2687 | 858 | 3.3383 | 0.0889 | 3.3383 | 1.8271 |
| 0.2753 | 4.2786 | 860 | 3.3343 | 0.1007 | 3.3343 | 1.8260 |
| 0.2753 | 4.2886 | 862 | 3.3995 | 0.1121 | 3.3995 | 1.8438 |
| 0.2753 | 4.2985 | 864 | 3.3558 | 0.1007 | 3.3558 | 1.8319 |
| 0.2753 | 4.3085 | 866 | 3.3301 | 0.1007 | 3.3301 | 1.8249 |
| 0.2753 | 4.3184 | 868 | 3.2738 | 0.1218 | 3.2738 | 1.8094 |
| 0.2753 | 4.3284 | 870 | 3.1239 | 0.0873 | 3.1239 | 1.7675 |
| 0.2753 | 4.3383 | 872 | 2.9207 | 0.0804 | 2.9207 | 1.7090 |
| 0.2753 | 4.3483 | 874 | 2.8400 | 0.0804 | 2.8400 | 1.6852 |
| 0.2753 | 4.3582 | 876 | 2.8679 | 0.0804 | 2.8679 | 1.6935 |
| 0.2753 | 4.3682 | 878 | 2.9796 | 0.0804 | 2.9796 | 1.7262 |
| 0.2753 | 4.3781 | 880 | 3.1065 | 0.1275 | 3.1065 | 1.7625 |
| 0.2753 | 4.3881 | 882 | 3.3063 | 0.1841 | 3.3063 | 1.8183 |
| 0.2753 | 4.3980 | 884 | 3.4369 | 0.1613 | 3.4369 | 1.8539 |
| 0.2753 | 4.4080 | 886 | 3.4634 | 0.1777 | 3.4634 | 1.8610 |
| 0.2753 | 4.4179 | 888 | 3.4242 | 0.1777 | 3.4242 | 1.8505 |
| 0.2753 | 4.4279 | 890 | 3.3840 | 0.1777 | 3.3840 | 1.8396 |
| 0.2753 | 4.4378 | 892 | 3.3773 | 0.1764 | 3.3773 | 1.8378 |
| 0.2753 | 4.4478 | 894 | 3.3828 | 0.1669 | 3.3828 | 1.8392 |
| 0.2753 | 4.4577 | 896 | 3.4395 | 0.1618 | 3.4395 | 1.8546 |
| 0.2753 | 4.4677 | 898 | 3.4156 | 0.1428 | 3.4156 | 1.8481 |
| 0.2753 | 4.4776 | 900 | 3.4162 | 0.0571 | 3.4162 | 1.8483 |
| 0.2753 | 4.4876 | 902 | 3.3953 | 0.0445 | 3.3953 | 1.8426 |
| 0.2753 | 4.4975 | 904 | 3.3197 | 0.0314 | 3.3197 | 1.8220 |
| 0.2753 | 4.5075 | 906 | 3.2645 | 0.0475 | 3.2645 | 1.8068 |
| 0.2753 | 4.5174 | 908 | 3.1938 | 0.0475 | 3.1938 | 1.7871 |
| 0.2753 | 4.5274 | 910 | 3.1546 | 0.0704 | 3.1546 | 1.7761 |
| 0.2753 | 4.5373 | 912 | 3.1406 | 0.0839 | 3.1406 | 1.7722 |
| 0.2753 | 4.5473 | 914 | 3.1694 | 0.0966 | 3.1694 | 1.7803 |
| 0.2753 | 4.5572 | 916 | 3.1846 | 0.1258 | 3.1846 | 1.7845 |
| 0.2753 | 4.5672 | 918 | 3.2060 | 0.1979 | 3.2060 | 1.7905 |
| 0.2753 | 4.5771 | 920 | 3.2969 | 0.1957 | 3.2969 | 1.8157 |
| 0.2753 | 4.5871 | 922 | 3.4440 | 0.1913 | 3.4440 | 1.8558 |
| 0.2753 | 4.5970 | 924 | 3.4237 | 0.1792 | 3.4237 | 1.8503 |
| 0.2753 | 4.6070 | 926 | 3.4568 | 0.1542 | 3.4568 | 1.8593 |
| 0.2753 | 4.6169 | 928 | 3.4585 | 0.1467 | 3.4585 | 1.8597 |
| 0.2753 | 4.6269 | 930 | 3.5295 | 0.1352 | 3.5295 | 1.8787 |
| 0.2753 | 4.6368 | 932 | 3.6202 | 0.1157 | 3.6202 | 1.9027 |
| 0.2753 | 4.6468 | 934 | 3.5937 | 0.1028 | 3.5937 | 1.8957 |
| 0.2753 | 4.6567 | 936 | 3.5610 | 0.1028 | 3.5610 | 1.8871 |
| 0.2753 | 4.6667 | 938 | 3.5121 | 0.1038 | 3.5121 | 1.8741 |
| 0.2753 | 4.6766 | 940 | 3.4424 | 0.1198 | 3.4424 | 1.8554 |
| 0.2753 | 4.6866 | 942 | 3.3874 | 0.1429 | 3.3874 | 1.8405 |
| 0.2753 | 4.6965 | 944 | 3.3404 | 0.1317 | 3.3404 | 1.8277 |
| 0.2753 | 4.7065 | 946 | 3.2767 | 0.1317 | 3.2767 | 1.8102 |
| 0.2753 | 4.7164 | 948 | 3.2516 | 0.1429 | 3.2516 | 1.8032 |
| 0.2753 | 4.7264 | 950 | 3.2444 | 0.1687 | 3.2444 | 1.8012 |
| 0.2753 | 4.7363 | 952 | 3.2723 | 0.1909 | 3.2723 | 1.8090 |
| 0.2753 | 4.7463 | 954 | 3.2497 | 0.2054 | 3.2497 | 1.8027 |
| 0.2753 | 4.7562 | 956 | 3.2804 | 0.2143 | 3.2804 | 1.8112 |
| 0.2753 | 4.7662 | 958 | 3.2564 | 0.1988 | 3.2564 | 1.8045 |
| 0.2753 | 4.7761 | 960 | 3.1985 | 0.2029 | 3.1985 | 1.7884 |
| 0.2753 | 4.7861 | 962 | 3.2147 | 0.2029 | 3.2147 | 1.7930 |
| 0.2753 | 4.7960 | 964 | 3.2298 | 0.1750 | 3.2298 | 1.7972 |
| 0.2753 | 4.8060 | 966 | 3.2409 | 0.1909 | 3.2409 | 1.8003 |
| 0.2753 | 4.8159 | 968 | 3.3343 | 0.2093 | 3.3343 | 1.8260 |
| 0.2753 | 4.8259 | 970 | 3.4026 | 0.1846 | 3.4026 | 1.8446 |
| 0.2753 | 4.8358 | 972 | 3.5086 | 0.1728 | 3.5086 | 1.8731 |
| 0.2753 | 4.8458 | 974 | 3.5562 | 0.1820 | 3.5562 | 1.8858 |
| 0.2753 | 4.8557 | 976 | 3.6528 | 0.1761 | 3.6528 | 1.9112 |
| 0.2753 | 4.8657 | 978 | 3.6855 | 0.1626 | 3.6855 | 1.9198 |
| 0.2753 | 4.8756 | 980 | 3.5890 | 0.1601 | 3.5890 | 1.8945 |
| 0.2753 | 4.8856 | 982 | 3.3888 | 0.1308 | 3.3888 | 1.8409 |
| 0.2753 | 4.8955 | 984 | 3.1675 | 0.1099 | 3.1675 | 1.7797 |
| 0.2753 | 4.9055 | 986 | 3.0248 | 0.1169 | 3.0248 | 1.7392 |
| 0.2753 | 4.9154 | 988 | 3.0402 | 0.1169 | 3.0402 | 1.7436 |
| 0.2753 | 4.9254 | 990 | 3.0727 | 0.1607 | 3.0727 | 1.7529 |
| 0.2753 | 4.9353 | 992 | 3.1733 | 0.1909 | 3.1733 | 1.7814 |
| 0.2753 | 4.9453 | 994 | 3.2912 | 0.1909 | 3.2912 | 1.8142 |
| 0.2753 | 4.9552 | 996 | 3.3416 | 0.1935 | 3.3416 | 1.8280 |
| 0.2753 | 4.9652 | 998 | 3.2984 | 0.1909 | 3.2984 | 1.8162 |
| 0.0533 | 4.9751 | 1000 | 3.1997 | 0.1712 | 3.1997 | 1.7888 |
| 0.0533 | 4.9851 | 1002 | 3.0928 | 0.1209 | 3.0928 | 1.7586 |
| 0.0533 | 4.9950 | 1004 | 3.0025 | 0.1084 | 3.0025 | 1.7328 |
| 0.0533 | 5.0050 | 1006 | 2.9815 | 0.1124 | 2.9815 | 1.7267 |
| 0.0533 | 5.0149 | 1008 | 3.0511 | 0.1084 | 3.0511 | 1.7468 |
| 0.0533 | 5.0249 | 1010 | 3.1762 | 0.0908 | 3.1762 | 1.7822 |
| 0.0533 | 5.0348 | 1012 | 3.2893 | 0.0873 | 3.2893 | 1.8136 |
| 0.0533 | 5.0448 | 1014 | 3.3772 | 0.0704 | 3.3772 | 1.8377 |
| 0.0533 | 5.0547 | 1016 | 3.3880 | 0.0704 | 3.3880 | 1.8407 |
| 0.0533 | 5.0647 | 1018 | 3.3550 | 0.0704 | 3.3550 | 1.8317 |
| 0.0533 | 5.0746 | 1020 | 3.3016 | 0.0704 | 3.3016 | 1.8170 |
| 0.0533 | 5.0846 | 1022 | 3.2474 | 0.0704 | 3.2474 | 1.8020 |
| 0.0533 | 5.0945 | 1024 | 3.2269 | 0.0704 | 3.2269 | 1.7964 |
| 0.0533 | 5.1045 | 1026 | 3.2900 | 0.1072 | 3.2900 | 1.8138 |
| 0.0533 | 5.1144 | 1028 | 3.3332 | 0.1584 | 3.3332 | 1.8257 |
| 0.0533 | 5.1244 | 1030 | 3.4004 | 0.1684 | 3.4004 | 1.8440 |
| 0.0533 | 5.1343 | 1032 | 3.4440 | 0.1618 | 3.4440 | 1.8558 |
| 0.0533 | 5.1443 | 1034 | 3.5343 | 0.1714 | 3.5343 | 1.8800 |
| 0.0533 | 5.1542 | 1036 | 3.5463 | 0.1618 | 3.5463 | 1.8832 |
| 0.0533 | 5.1642 | 1038 | 3.4659 | 0.1618 | 3.4659 | 1.8617 |
| 0.0533 | 5.1741 | 1040 | 3.3957 | 0.1415 | 3.3957 | 1.8427 |
| 0.0533 | 5.1841 | 1042 | 3.4241 | 0.1415 | 3.4241 | 1.8504 |
| 0.0533 | 5.1940 | 1044 | 3.4504 | 0.1308 | 3.4504 | 1.8575 |
| 0.0533 | 5.2040 | 1046 | 3.4369 | 0.1308 | 3.4369 | 1.8539 |
| 0.0533 | 5.2139 | 1048 | 3.4163 | 0.1103 | 3.4163 | 1.8483 |
| 0.0533 | 5.2239 | 1050 | 3.3651 | 0.1103 | 3.3651 | 1.8344 |
| 0.0533 | 5.2338 | 1052 | 3.3200 | 0.1308 | 3.3200 | 1.8221 |
| 0.0533 | 5.2438 | 1054 | 3.3288 | 0.1518 | 3.3288 | 1.8245 |
| 0.0533 | 5.2537 | 1056 | 3.3783 | 0.1618 | 3.3783 | 1.8380 |
| 0.0533 | 5.2637 | 1058 | 3.4229 | 0.1618 | 3.4229 | 1.8501 |
| 0.0533 | 5.2736 | 1060 | 3.4939 | 0.1714 | 3.4939 | 1.8692 |
| 0.0533 | 5.2836 | 1062 | 3.5384 | 0.1807 | 3.5384 | 1.8811 |
| 0.0533 | 5.2935 | 1064 | 3.6223 | 0.1898 | 3.6223 | 1.9032 |
| 0.0533 | 5.3035 | 1066 | 3.6798 | 0.1807 | 3.6798 | 1.9183 |
| 0.0533 | 5.3134 | 1068 | 3.7246 | 0.1563 | 3.7246 | 1.9299 |
| 0.0533 | 5.3234 | 1070 | 3.7672 | 0.1415 | 3.7672 | 1.9409 |
| 0.0533 | 5.3333 | 1072 | 3.7201 | 0.1315 | 3.7201 | 1.9288 |
| 0.0533 | 5.3433 | 1074 | 3.6176 | 0.0986 | 3.6176 | 1.9020 |
| 0.0533 | 5.3532 | 1076 | 3.4821 | 0.0639 | 3.4821 | 1.8660 |
| 0.0533 | 5.3632 | 1078 | 3.3277 | 0.0704 | 3.3277 | 1.8242 |
| 0.0533 | 5.3731 | 1080 | 3.2477 | 0.0704 | 3.2477 | 1.8022 |
| 0.0533 | 5.3831 | 1082 | 3.2544 | 0.0704 | 3.2544 | 1.8040 |
| 0.0533 | 5.3930 | 1084 | 3.2413 | 0.0704 | 3.2413 | 1.8004 |
| 0.0533 | 5.4030 | 1086 | 3.2498 | 0.0704 | 3.2498 | 1.8027 |
| 0.0533 | 5.4129 | 1088 | 3.2458 | 0.0704 | 3.2458 | 1.8016 |
| 0.0533 | 5.4229 | 1090 | 3.2557 | 0.0704 | 3.2557 | 1.8044 |
| 0.0533 | 5.4328 | 1092 | 3.2979 | 0.0639 | 3.2979 | 1.8160 |
| 0.0533 | 5.4428 | 1094 | 3.3551 | 0.0639 | 3.3551 | 1.8317 |
| 0.0533 | 5.4527 | 1096 | 3.3971 | 0.0639 | 3.3971 | 1.8431 |
| 0.0533 | 5.4627 | 1098 | 3.5102 | 0.0639 | 3.5102 | 1.8736 |
| 0.0533 | 5.4726 | 1100 | 3.6329 | 0.1215 | 3.6329 | 1.9060 |
| 0.0533 | 5.4826 | 1102 | 3.6627 | 0.1618 | 3.6627 | 1.9138 |
| 0.0533 | 5.4925 | 1104 | 3.6626 | 0.1807 | 3.6626 | 1.9138 |
| 0.0533 | 5.5025 | 1106 | 3.6262 | 0.1807 | 3.6262 | 1.9043 |
| 0.0533 | 5.5124 | 1108 | 3.5610 | 0.1872 | 3.5610 | 1.8871 |
| 0.0533 | 5.5224 | 1110 | 3.5043 | 0.1872 | 3.5043 | 1.8720 |
| 0.0533 | 5.5323 | 1112 | 3.4708 | 0.1780 | 3.4708 | 1.8630 |
| 0.0533 | 5.5423 | 1114 | 3.4169 | 0.1594 | 3.4169 | 1.8485 |
| 0.0533 | 5.5522 | 1116 | 3.3787 | 0.1750 | 3.3787 | 1.8381 |
| 0.0533 | 5.5622 | 1118 | 3.3745 | 0.1750 | 3.3745 | 1.8370 |
| 0.0533 | 5.5721 | 1120 | 3.4464 | 0.1780 | 3.4464 | 1.8564 |
| 0.0533 | 5.5821 | 1122 | 3.5440 | 0.1872 | 3.5440 | 1.8825 |
| 0.0533 | 5.5920 | 1124 | 3.6750 | 0.1658 | 3.6750 | 1.9170 |
| 0.0533 | 5.6020 | 1126 | 3.7308 | 0.1512 | 3.7308 | 1.9315 |
| 0.0533 | 5.6119 | 1128 | 3.7424 | 0.1415 | 3.7424 | 1.9345 |
| 0.0533 | 5.6219 | 1130 | 3.7856 | 0.1415 | 3.7856 | 1.9457 |
| 0.0533 | 5.6318 | 1132 | 3.7787 | 0.1415 | 3.7787 | 1.9439 |
| 0.0533 | 5.6418 | 1134 | 3.8064 | 0.1415 | 3.8064 | 1.9510 |
| 0.0533 | 5.6517 | 1136 | 3.7813 | 0.1415 | 3.7813 | 1.9446 |
| 0.0533 | 5.6617 | 1138 | 3.7126 | 0.1315 | 3.7126 | 1.9268 |
| 0.0533 | 5.6716 | 1140 | 3.6571 | 0.1315 | 3.6571 | 1.9124 |
| 0.0533 | 5.6816 | 1142 | 3.6554 | 0.1315 | 3.6554 | 1.9119 |
| 0.0533 | 5.6915 | 1144 | 3.6042 | 0.1212 | 3.6042 | 1.8985 |
| 0.0533 | 5.7015 | 1146 | 3.5073 | 0.1105 | 3.5073 | 1.8728 |
| 0.0533 | 5.7114 | 1148 | 3.3769 | 0.1323 | 3.3769 | 1.8376 |
| 0.0533 | 5.7214 | 1150 | 3.2837 | 0.1743 | 3.2837 | 1.8121 |
| 0.0533 | 5.7313 | 1152 | 3.2446 | 0.1720 | 3.2446 | 1.8013 |
| 0.0533 | 5.7413 | 1154 | 3.2462 | 0.1696 | 3.2462 | 1.8017 |
| 0.0533 | 5.7512 | 1156 | 3.2738 | 0.1957 | 3.2738 | 1.8094 |
| 0.0533 | 5.7612 | 1158 | 3.3607 | 0.1935 | 3.3607 | 1.8332 |
| 0.0533 | 5.7711 | 1160 | 3.4666 | 0.1872 | 3.4666 | 1.8619 |
| 0.0533 | 5.7811 | 1162 | 3.5811 | 0.1872 | 3.5811 | 1.8924 |
| 0.0533 | 5.7910 | 1164 | 3.7113 | 0.1728 | 3.7113 | 1.9265 |
| 0.0533 | 5.8010 | 1166 | 3.7613 | 0.1438 | 3.7613 | 1.9394 |
| 0.0533 | 5.8109 | 1168 | 3.7244 | 0.1644 | 3.7244 | 1.9299 |
| 0.0533 | 5.8209 | 1170 | 3.6678 | 0.1563 | 3.6678 | 1.9152 |
| 0.0533 | 5.8308 | 1172 | 3.6146 | 0.1465 | 3.6146 | 1.9012 |
| 0.0533 | 5.8408 | 1174 | 3.5968 | 0.1465 | 3.5968 | 1.8965 |
| 0.0533 | 5.8507 | 1176 | 3.6282 | 0.1315 | 3.6282 | 1.9048 |
| 0.0533 | 5.8607 | 1178 | 3.6452 | 0.1063 | 3.6452 | 1.9092 |
| 0.0533 | 5.8706 | 1180 | 3.6909 | 0.1063 | 3.6909 | 1.9212 |
| 0.0533 | 5.8806 | 1182 | 3.7496 | 0.1168 | 3.7496 | 1.9364 |
| 0.0533 | 5.8905 | 1184 | 3.7997 | 0.1168 | 3.7997 | 1.9493 |
| 0.0533 | 5.9005 | 1186 | 3.8538 | 0.1269 | 3.8537 | 1.9631 |
| 0.0533 | 5.9104 | 1188 | 3.8850 | 0.1050 | 3.8850 | 1.9710 |
| 0.0533 | 5.9204 | 1190 | 3.8008 | 0.1050 | 3.8008 | 1.9496 |
| 0.0533 | 5.9303 | 1192 | 3.7330 | 0.1050 | 3.7330 | 1.9321 |
| 0.0533 | 5.9403 | 1194 | 3.7381 | 0.1189 | 3.7381 | 1.9334 |
| 0.0533 | 5.9502 | 1196 | 3.8198 | 0.1348 | 3.8198 | 1.9544 |
| 0.0533 | 5.9602 | 1198 | 3.8390 | 0.1309 | 3.8390 | 1.9593 |
| 0.0533 | 5.9701 | 1200 | 3.8190 | 0.1309 | 3.8190 | 1.9542 |
| 0.0533 | 5.9801 | 1202 | 3.7790 | 0.1131 | 3.7790 | 1.9440 |
| 0.0533 | 5.9900 | 1204 | 3.7277 | 0.1261 | 3.7277 | 1.9307 |
| 0.0533 | 6.0 | 1206 | 3.6484 | 0.1509 | 3.6484 | 1.9101 |
| 0.0533 | 6.0100 | 1208 | 3.6113 | 0.1499 | 3.6113 | 1.9003 |
| 0.0533 | 6.0199 | 1210 | 3.6191 | 0.1509 | 3.6191 | 1.9024 |
| 0.0533 | 6.0299 | 1212 | 3.6616 | 0.1370 | 3.6616 | 1.9135 |
| 0.0533 | 6.0398 | 1214 | 3.6730 | 0.1370 | 3.6730 | 1.9165 |
| 0.0533 | 6.0498 | 1216 | 3.6844 | 0.1460 | 3.6844 | 1.9195 |
| 0.0533 | 6.0597 | 1218 | 3.6350 | 0.1592 | 3.6350 | 1.9066 |
| 0.0533 | 6.0697 | 1220 | 3.5905 | 0.1802 | 3.5905 | 1.8949 |
| 0.0533 | 6.0796 | 1222 | 3.5262 | 0.1618 | 3.5262 | 1.8778 |
| 0.0533 | 6.0896 | 1224 | 3.4552 | 0.1773 | 3.4552 | 1.8588 |
| 0.0533 | 6.0995 | 1226 | 3.4195 | 0.1675 | 3.4195 | 1.8492 |
| 0.0533 | 6.1095 | 1228 | 3.4064 | 0.1469 | 3.4064 | 1.8456 |
| 0.0533 | 6.1194 | 1230 | 3.4192 | 0.1264 | 3.4192 | 1.8491 |
| 0.0533 | 6.1294 | 1232 | 3.4083 | 0.1150 | 3.4083 | 1.8462 |
| 0.0533 | 6.1393 | 1234 | 3.3672 | 0.0865 | 3.3672 | 1.8350 |
| 0.0533 | 6.1493 | 1236 | 3.3328 | 0.0639 | 3.3328 | 1.8256 |
| 0.0533 | 6.1592 | 1238 | 3.3093 | 0.0639 | 3.3093 | 1.8192 |
| 0.0533 | 6.1692 | 1240 | 3.3483 | 0.0475 | 3.3483 | 1.8298 |
| 0.0533 | 6.1791 | 1242 | 3.3519 | 0.0475 | 3.3519 | 1.8308 |
| 0.0533 | 6.1891 | 1244 | 3.3817 | 0.0475 | 3.3817 | 1.8390 |
| 0.0533 | 6.1990 | 1246 | 3.4246 | 0.0475 | 3.4246 | 1.8506 |
| 0.0533 | 6.2090 | 1248 | 3.5208 | 0.0702 | 3.5208 | 1.8764 |
| 0.0533 | 6.2189 | 1250 | 3.5580 | 0.0702 | 3.5580 | 1.8863 |
| 0.0533 | 6.2289 | 1252 | 3.5833 | 0.0702 | 3.5833 | 1.8930 |
| 0.0533 | 6.2388 | 1254 | 3.5501 | 0.0889 | 3.5501 | 1.8842 |
| 0.0533 | 6.2488 | 1256 | 3.4838 | 0.0889 | 3.4838 | 1.8665 |
| 0.0533 | 6.2587 | 1258 | 3.4270 | 0.1052 | 3.4270 | 1.8512 |
| 0.0533 | 6.2687 | 1260 | 3.3409 | 0.0865 | 3.3409 | 1.8278 |
| 0.0533 | 6.2786 | 1262 | 3.2784 | 0.0632 | 3.2784 | 1.8106 |
| 0.0533 | 6.2886 | 1264 | 3.2734 | 0.0632 | 3.2734 | 1.8092 |
| 0.0533 | 6.2985 | 1266 | 3.2808 | 0.0632 | 3.2808 | 1.8113 |
| 0.0533 | 6.3085 | 1268 | 3.3299 | 0.0632 | 3.3299 | 1.8248 |
| 0.0533 | 6.3184 | 1270 | 3.4072 | 0.0668 | 3.4072 | 1.8459 |
| 0.0533 | 6.3284 | 1272 | 3.4684 | 0.1021 | 3.4684 | 1.8624 |
| 0.0533 | 6.3383 | 1274 | 3.5677 | 0.1481 | 3.5677 | 1.8888 |
| 0.0533 | 6.3483 | 1276 | 3.6189 | 0.1428 | 3.6189 | 1.9023 |
| 0.0533 | 6.3582 | 1278 | 3.6484 | 0.1529 | 3.6484 | 1.9101 |
| 0.0533 | 6.3682 | 1280 | 3.6447 | 0.1415 | 3.6447 | 1.9091 |
| 0.0533 | 6.3781 | 1282 | 3.6684 | 0.1364 | 3.6684 | 1.9153 |
| 0.0533 | 6.3881 | 1284 | 3.6974 | 0.1465 | 3.6974 | 1.9229 |
| 0.0533 | 6.3980 | 1286 | 3.7566 | 0.1658 | 3.7566 | 1.9382 |
| 0.0533 | 6.4080 | 1288 | 3.8065 | 0.1605 | 3.8065 | 1.9510 |
| 0.0533 | 6.4179 | 1290 | 3.7980 | 0.1520 | 3.7980 | 1.9488 |
| 0.0533 | 6.4279 | 1292 | 3.7837 | 0.1520 | 3.7837 | 1.9452 |
| 0.0533 | 6.4378 | 1294 | 3.6973 | 0.1658 | 3.6973 | 1.9228 |
| 0.0533 | 6.4478 | 1296 | 3.6355 | 0.1658 | 3.6355 | 1.9067 |
| 0.0533 | 6.4577 | 1298 | 3.6328 | 0.1658 | 3.6328 | 1.9060 |
| 0.0533 | 6.4677 | 1300 | 3.6418 | 0.1658 | 3.6418 | 1.9083 |
| 0.0533 | 6.4776 | 1302 | 3.6834 | 0.1750 | 3.6834 | 1.9192 |
| 0.0533 | 6.4876 | 1304 | 3.7742 | 0.1609 | 3.7742 | 1.9427 |
| 0.0533 | 6.4975 | 1306 | 3.7949 | 0.1863 | 3.7949 | 1.9480 |
| 0.0533 | 6.5075 | 1308 | 3.7651 | 0.1781 | 3.7651 | 1.9404 |
| 0.0533 | 6.5174 | 1310 | 3.7548 | 0.1781 | 3.7548 | 1.9377 |
| 0.0533 | 6.5274 | 1312 | 3.7638 | 0.1792 | 3.7638 | 1.9400 |
| 0.0533 | 6.5373 | 1314 | 3.7248 | 0.1781 | 3.7248 | 1.9300 |
| 0.0533 | 6.5473 | 1316 | 3.7124 | 0.1781 | 3.7124 | 1.9268 |
| 0.0533 | 6.5572 | 1318 | 3.7127 | 0.1618 | 3.7127 | 1.9268 |
| 0.0533 | 6.5672 | 1320 | 3.7170 | 0.1609 | 3.7170 | 1.9279 |
| 0.0533 | 6.5771 | 1322 | 3.6726 | 0.1763 | 3.6726 | 1.9164 |
| 0.0533 | 6.5871 | 1324 | 3.6197 | 0.1750 | 3.6197 | 1.9025 |
| 0.0533 | 6.5970 | 1326 | 3.5904 | 0.1553 | 3.5904 | 1.8948 |
| 0.0533 | 6.6070 | 1328 | 3.5027 | 0.1553 | 3.5027 | 1.8716 |
| 0.0533 | 6.6169 | 1330 | 3.4188 | 0.1150 | 3.4188 | 1.8490 |
| 0.0533 | 6.6269 | 1332 | 3.3764 | 0.1150 | 3.3764 | 1.8375 |
| 0.0533 | 6.6368 | 1334 | 3.3546 | 0.1103 | 3.3546 | 1.8315 |
| 0.0533 | 6.6468 | 1336 | 3.3839 | 0.1150 | 3.3839 | 1.8395 |
| 0.0533 | 6.6567 | 1338 | 3.4604 | 0.1150 | 3.4604 | 1.8602 |
| 0.0533 | 6.6667 | 1340 | 3.5627 | 0.1259 | 3.5627 | 1.8875 |
| 0.0533 | 6.6766 | 1342 | 3.6754 | 0.1553 | 3.6754 | 1.9171 |
| 0.0533 | 6.6866 | 1344 | 3.7762 | 0.1605 | 3.7762 | 1.9432 |
| 0.0533 | 6.6965 | 1346 | 3.8528 | 0.1809 | 3.8528 | 1.9628 |
| 0.0533 | 6.7065 | 1348 | 3.8635 | 0.1809 | 3.8635 | 1.9656 |
| 0.0533 | 6.7164 | 1350 | 3.8156 | 0.1462 | 3.8156 | 1.9533 |
| 0.0533 | 6.7264 | 1352 | 3.7557 | 0.1367 | 3.7557 | 1.9380 |
| 0.0533 | 6.7363 | 1354 | 3.6582 | 0.1512 | 3.6582 | 1.9126 |
| 0.0533 | 6.7463 | 1356 | 3.5889 | 0.1501 | 3.5889 | 1.8944 |
| 0.0533 | 6.7562 | 1358 | 3.4984 | 0.1428 | 3.4984 | 1.8704 |
| 0.0533 | 6.7662 | 1360 | 3.4510 | 0.1323 | 3.4510 | 1.8577 |
| 0.0533 | 6.7761 | 1362 | 3.4183 | 0.1296 | 3.4183 | 1.8489 |
| 0.0533 | 6.7861 | 1364 | 3.4123 | 0.1455 | 3.4123 | 1.8472 |
| 0.0533 | 6.7960 | 1366 | 3.4427 | 0.1296 | 3.4427 | 1.8554 |
| 0.0533 | 6.8060 | 1368 | 3.4536 | 0.1323 | 3.4536 | 1.8584 |
| 0.0533 | 6.8159 | 1370 | 3.4476 | 0.1323 | 3.4476 | 1.8568 |
| 0.0533 | 6.8259 | 1372 | 3.4752 | 0.1168 | 3.4752 | 1.8642 |
| 0.0533 | 6.8358 | 1374 | 3.5413 | 0.1377 | 3.5413 | 1.8818 |
| 0.0533 | 6.8458 | 1376 | 3.6240 | 0.1315 | 3.6240 | 1.9037 |
| 0.0533 | 6.8557 | 1378 | 3.7491 | 0.1354 | 3.7491 | 1.9363 |
| 0.0533 | 6.8657 | 1380 | 3.8194 | 0.1354 | 3.8194 | 1.9543 |
| 0.0533 | 6.8756 | 1382 | 3.8391 | 0.1354 | 3.8391 | 1.9594 |
| 0.0533 | 6.8856 | 1384 | 3.7942 | 0.0954 | 3.7942 | 1.9479 |
| 0.0533 | 6.8955 | 1386 | 3.7070 | 0.0842 | 3.7070 | 1.9254 |
| 0.0533 | 6.9055 | 1388 | 3.6059 | 0.0787 | 3.6059 | 1.8989 |
| 0.0533 | 6.9154 | 1390 | 3.5348 | 0.0787 | 3.5348 | 1.8801 |
| 0.0533 | 6.9254 | 1392 | 3.4566 | 0.0470 | 3.4566 | 1.8592 |
| 0.0533 | 6.9353 | 1394 | 3.4455 | 0.0337 | 3.4455 | 1.8562 |
| 0.0533 | 6.9453 | 1396 | 3.4451 | 0.0337 | 3.4451 | 1.8561 |
| 0.0533 | 6.9552 | 1398 | 3.4537 | 0.0408 | 3.4537 | 1.8584 |
| 0.0533 | 6.9652 | 1400 | 3.5159 | 0.0787 | 3.5159 | 1.8751 |
| 0.0533 | 6.9751 | 1402 | 3.6171 | 0.0995 | 3.6171 | 1.9019 |
| 0.0533 | 6.9851 | 1404 | 3.7264 | 0.0995 | 3.7264 | 1.9304 |
| 0.0533 | 6.9950 | 1406 | 3.8127 | 0.1105 | 3.8127 | 1.9526 |
| 0.0533 | 7.0050 | 1408 | 3.8450 | 0.1403 | 3.8450 | 1.9609 |
| 0.0533 | 7.0149 | 1410 | 3.8502 | 0.1403 | 3.8502 | 1.9622 |
| 0.0533 | 7.0249 | 1412 | 3.8767 | 0.1403 | 3.8767 | 1.9689 |
| 0.0533 | 7.0348 | 1414 | 3.8988 | 0.1512 | 3.8988 | 1.9745 |
| 0.0533 | 7.0448 | 1416 | 3.8677 | 0.1512 | 3.8677 | 1.9666 |
| 0.0533 | 7.0547 | 1418 | 3.8417 | 0.1696 | 3.8417 | 1.9600 |
| 0.0533 | 7.0647 | 1420 | 3.8215 | 0.1696 | 3.8215 | 1.9549 |
| 0.0533 | 7.0746 | 1422 | 3.7621 | 0.1605 | 3.7621 | 1.9396 |
| 0.0533 | 7.0846 | 1424 | 3.6750 | 0.1512 | 3.6750 | 1.9170 |
| 0.0533 | 7.0945 | 1426 | 3.5888 | 0.1403 | 3.5888 | 1.8944 |
| 0.0533 | 7.1045 | 1428 | 3.4626 | 0.1269 | 3.4626 | 1.8608 |
| 0.0533 | 7.1144 | 1430 | 3.3717 | 0.1269 | 3.3717 | 1.8362 |
| 0.0533 | 7.1244 | 1432 | 3.3628 | 0.1269 | 3.3628 | 1.8338 |
| 0.0533 | 7.1343 | 1434 | 3.4075 | 0.1269 | 3.4075 | 1.8459 |
| 0.0533 | 7.1443 | 1436 | 3.5048 | 0.1168 | 3.5048 | 1.8721 |
| 0.0533 | 7.1542 | 1438 | 3.6135 | 0.1105 | 3.6135 | 1.9009 |
| 0.0533 | 7.1642 | 1440 | 3.6890 | 0.1212 | 3.6890 | 1.9207 |
| 0.0533 | 7.1741 | 1442 | 3.7200 | 0.1212 | 3.7200 | 1.9287 |
| 0.0533 | 7.1841 | 1444 | 3.7295 | 0.1212 | 3.7295 | 1.9312 |
| 0.0533 | 7.1940 | 1446 | 3.7428 | 0.1212 | 3.7428 | 1.9346 |
| 0.0533 | 7.2040 | 1448 | 3.7521 | 0.1212 | 3.7521 | 1.9370 |
| 0.0533 | 7.2139 | 1450 | 3.7027 | 0.1212 | 3.7027 | 1.9242 |
| 0.0533 | 7.2239 | 1452 | 3.6432 | 0.1212 | 3.6432 | 1.9087 |
| 0.0533 | 7.2338 | 1454 | 3.6385 | 0.1274 | 3.6385 | 1.9075 |
| 0.0533 | 7.2438 | 1456 | 3.6784 | 0.1501 | 3.6784 | 1.9179 |
| 0.0533 | 7.2537 | 1458 | 3.7138 | 0.1512 | 3.7138 | 1.9271 |
| 0.0533 | 7.2637 | 1460 | 3.7149 | 0.1512 | 3.7149 | 1.9274 |
| 0.0533 | 7.2736 | 1462 | 3.7322 | 0.1512 | 3.7322 | 1.9319 |
| 0.0533 | 7.2836 | 1464 | 3.7434 | 0.1512 | 3.7434 | 1.9348 |
| 0.0533 | 7.2935 | 1466 | 3.7483 | 0.1367 | 3.7483 | 1.9360 |
| 0.0533 | 7.3035 | 1468 | 3.8014 | 0.1367 | 3.8014 | 1.9497 |
| 0.0533 | 7.3134 | 1470 | 3.8577 | 0.1244 | 3.8577 | 1.9641 |
| 0.0533 | 7.3234 | 1472 | 3.9216 | 0.1426 | 3.9216 | 1.9803 |
| 0.0533 | 7.3333 | 1474 | 3.9734 | 0.1437 | 3.9734 | 1.9933 |
| 0.0533 | 7.3433 | 1476 | 3.9752 | 0.1372 | 3.9752 | 1.9938 |
| 0.0533 | 7.3532 | 1478 | 3.9645 | 0.1532 | 3.9645 | 1.9911 |
| 0.0533 | 7.3632 | 1480 | 3.9404 | 0.1532 | 3.9404 | 1.9851 |
| 0.0533 | 7.3731 | 1482 | 3.9099 | 0.1532 | 3.9099 | 1.9773 |
| 0.0533 | 7.3831 | 1484 | 3.8834 | 0.1532 | 3.8834 | 1.9706 |
| 0.0533 | 7.3930 | 1486 | 3.8073 | 0.1244 | 3.8073 | 1.9512 |
| 0.0533 | 7.4030 | 1488 | 3.7208 | 0.1439 | 3.7208 | 1.9289 |
| 0.0533 | 7.4129 | 1490 | 3.6371 | 0.1650 | 3.6371 | 1.9071 |
| 0.0533 | 7.4229 | 1492 | 3.5903 | 0.1465 | 3.5903 | 1.8948 |
| 0.0533 | 7.4328 | 1494 | 3.5586 | 0.1465 | 3.5586 | 1.8864 |
| 0.0533 | 7.4428 | 1496 | 3.4866 | 0.1259 | 3.4866 | 1.8672 |
| 0.0533 | 7.4527 | 1498 | 3.4224 | 0.1215 | 3.4224 | 1.8500 |
| 0.0409 | 7.4627 | 1500 | 3.3863 | 0.1157 | 3.3863 | 1.8402 |
| 0.0409 | 7.4726 | 1502 | 3.3813 | 0.1157 | 3.3813 | 1.8388 |
| 0.0409 | 7.4826 | 1504 | 3.3860 | 0.1007 | 3.3860 | 1.8401 |
| 0.0409 | 7.4925 | 1506 | 3.3834 | 0.0792 | 3.3834 | 1.8394 |
| 0.0409 | 7.5025 | 1508 | 3.3713 | 0.0792 | 3.3713 | 1.8361 |
| 0.0409 | 7.5124 | 1510 | 3.3835 | 0.0792 | 3.3835 | 1.8394 |
| 0.0409 | 7.5224 | 1512 | 3.4209 | 0.0729 | 3.4209 | 1.8496 |
| 0.0409 | 7.5323 | 1514 | 3.4698 | 0.0729 | 3.4698 | 1.8627 |
| 0.0409 | 7.5423 | 1516 | 3.5634 | 0.0943 | 3.5634 | 1.8877 |
| 0.0409 | 7.5522 | 1518 | 3.6849 | 0.0842 | 3.6849 | 1.9196 |
| 0.0409 | 7.5622 | 1520 | 3.7875 | 0.1168 | 3.7875 | 1.9462 |
| 0.0409 | 7.5721 | 1522 | 3.8783 | 0.1184 | 3.8783 | 1.9693 |
| 0.0409 | 7.5821 | 1524 | 3.9252 | 0.1148 | 3.9252 | 1.9812 |
| 0.0409 | 7.5920 | 1526 | 3.9373 | 0.1148 | 3.9373 | 1.9843 |
| 0.0409 | 7.6020 | 1528 | 3.9030 | 0.1148 | 3.9030 | 1.9756 |
| 0.0409 | 7.6119 | 1530 | 3.8631 | 0.1297 | 3.8631 | 1.9655 |
| 0.0409 | 7.6219 | 1532 | 3.8334 | 0.1512 | 3.8334 | 1.9579 |
| 0.0409 | 7.6318 | 1534 | 3.7794 | 0.1315 | 3.7794 | 1.9441 |
| 0.0409 | 7.6418 | 1536 | 3.6986 | 0.1212 | 3.6986 | 1.9232 |
| 0.0409 | 7.6517 | 1538 | 3.6267 | 0.1212 | 3.6267 | 1.9044 |
| 0.0409 | 7.6617 | 1540 | 3.5801 | 0.1105 | 3.5801 | 1.8921 |
| 0.0409 | 7.6716 | 1542 | 3.5779 | 0.1105 | 3.5779 | 1.8915 |
| 0.0409 | 7.6816 | 1544 | 3.5777 | 0.1105 | 3.5777 | 1.8915 |
| 0.0409 | 7.6915 | 1546 | 3.5870 | 0.1105 | 3.5870 | 1.8939 |
| 0.0409 | 7.7015 | 1548 | 3.6183 | 0.1212 | 3.6183 | 1.9022 |
| 0.0409 | 7.7114 | 1550 | 3.6365 | 0.1212 | 3.6365 | 1.9070 |
| 0.0409 | 7.7214 | 1552 | 3.6706 | 0.1212 | 3.6706 | 1.9159 |
| 0.0409 | 7.7313 | 1554 | 3.7306 | 0.1315 | 3.7306 | 1.9315 |
| 0.0409 | 7.7413 | 1556 | 3.7450 | 0.1501 | 3.7450 | 1.9352 |
| 0.0409 | 7.7512 | 1558 | 3.7499 | 0.1439 | 3.7499 | 1.9365 |
| 0.0409 | 7.7612 | 1560 | 3.7632 | 0.1381 | 3.7632 | 1.9399 |
| 0.0409 | 7.7711 | 1562 | 3.7734 | 0.1381 | 3.7734 | 1.9425 |
| 0.0409 | 7.7811 | 1564 | 3.7778 | 0.1313 | 3.7778 | 1.9436 |
| 0.0409 | 7.7910 | 1566 | 3.7931 | 0.1326 | 3.7931 | 1.9476 |
| 0.0409 | 7.8010 | 1568 | 3.7636 | 0.1313 | 3.7636 | 1.9400 |
| 0.0409 | 7.8109 | 1570 | 3.7318 | 0.1326 | 3.7318 | 1.9318 |
| 0.0409 | 7.8209 | 1572 | 3.7364 | 0.1326 | 3.7364 | 1.9330 |
| 0.0409 | 7.8308 | 1574 | 3.7157 | 0.1313 | 3.7157 | 1.9276 |
| 0.0409 | 7.8408 | 1576 | 3.6691 | 0.1368 | 3.6691 | 1.9155 |
| 0.0409 | 7.8507 | 1578 | 3.6282 | 0.1439 | 3.6282 | 1.9048 |
| 0.0409 | 7.8607 | 1580 | 3.5810 | 0.1157 | 3.5810 | 1.8924 |
| 0.0409 | 7.8706 | 1582 | 3.5646 | 0.1315 | 3.5646 | 1.8880 |
| 0.0409 | 7.8806 | 1584 | 3.5392 | 0.1315 | 3.5392 | 1.8813 |
| 0.0409 | 7.8905 | 1586 | 3.5699 | 0.1315 | 3.5699 | 1.8894 |
| 0.0409 | 7.9005 | 1588 | 3.6070 | 0.1315 | 3.6070 | 1.8992 |
| 0.0409 | 7.9104 | 1590 | 3.6267 | 0.1315 | 3.6267 | 1.9044 |
| 0.0409 | 7.9204 | 1592 | 3.6391 | 0.1315 | 3.6391 | 1.9076 |
| 0.0409 | 7.9303 | 1594 | 3.6397 | 0.1315 | 3.6397 | 1.9078 |
| 0.0409 | 7.9403 | 1596 | 3.6188 | 0.1315 | 3.6188 | 1.9023 |
| 0.0409 | 7.9502 | 1598 | 3.6042 | 0.1315 | 3.6042 | 1.8985 |
| 0.0409 | 7.9602 | 1600 | 3.5915 | 0.1315 | 3.5915 | 1.8951 |
| 0.0409 | 7.9701 | 1602 | 3.5879 | 0.1315 | 3.5879 | 1.8942 |
| 0.0409 | 7.9801 | 1604 | 3.5849 | 0.1315 | 3.5849 | 1.8934 |
| 0.0409 | 7.9900 | 1606 | 3.5872 | 0.1315 | 3.5872 | 1.8940 |
| 0.0409 | 8.0 | 1608 | 3.5692 | 0.1315 | 3.5692 | 1.8892 |
| 0.0409 | 8.0100 | 1610 | 3.5818 | 0.1315 | 3.5818 | 1.8926 |
| 0.0409 | 8.0199 | 1612 | 3.6126 | 0.1315 | 3.6126 | 1.9007 |
| 0.0409 | 8.0299 | 1614 | 3.6323 | 0.1315 | 3.6323 | 1.9059 |
| 0.0409 | 8.0398 | 1616 | 3.6804 | 0.1157 | 3.6804 | 1.9184 |
| 0.0409 | 8.0498 | 1618 | 3.6892 | 0.1157 | 3.6892 | 1.9207 |
| 0.0409 | 8.0597 | 1620 | 3.6789 | 0.1157 | 3.6789 | 1.9180 |
| 0.0409 | 8.0697 | 1622 | 3.6625 | 0.1315 | 3.6625 | 1.9138 |
| 0.0409 | 8.0796 | 1624 | 3.6438 | 0.1315 | 3.6438 | 1.9089 |
| 0.0409 | 8.0896 | 1626 | 3.6235 | 0.1315 | 3.6235 | 1.9035 |
| 0.0409 | 8.0995 | 1628 | 3.6300 | 0.1315 | 3.6300 | 1.9052 |
| 0.0409 | 8.1095 | 1630 | 3.6324 | 0.1315 | 3.6324 | 1.9059 |
| 0.0409 | 8.1194 | 1632 | 3.6345 | 0.1315 | 3.6345 | 1.9064 |
| 0.0409 | 8.1294 | 1634 | 3.6286 | 0.1315 | 3.6286 | 1.9049 |
| 0.0409 | 8.1393 | 1636 | 3.6300 | 0.1315 | 3.6300 | 1.9052 |
| 0.0409 | 8.1493 | 1638 | 3.6441 | 0.1315 | 3.6441 | 1.9089 |
| 0.0409 | 8.1592 | 1640 | 3.6233 | 0.1315 | 3.6233 | 1.9035 |
| 0.0409 | 8.1692 | 1642 | 3.5892 | 0.1465 | 3.5892 | 1.8945 |
| 0.0409 | 8.1791 | 1644 | 3.5754 | 0.1465 | 3.5754 | 1.8909 |
| 0.0409 | 8.1891 | 1646 | 3.5713 | 0.1465 | 3.5713 | 1.8898 |
| 0.0409 | 8.1990 | 1648 | 3.5635 | 0.1465 | 3.5635 | 1.8877 |
| 0.0409 | 8.2090 | 1650 | 3.5309 | 0.1465 | 3.5309 | 1.8791 |
| 0.0409 | 8.2189 | 1652 | 3.5007 | 0.1465 | 3.5007 | 1.8710 |
| 0.0409 | 8.2289 | 1654 | 3.5091 | 0.1465 | 3.5091 | 1.8733 |
| 0.0409 | 8.2388 | 1656 | 3.5255 | 0.1465 | 3.5255 | 1.8776 |
| 0.0409 | 8.2488 | 1658 | 3.5452 | 0.1465 | 3.5452 | 1.8829 |
| 0.0409 | 8.2587 | 1660 | 3.5690 | 0.1465 | 3.5690 | 1.8892 |
| 0.0409 | 8.2687 | 1662 | 3.5815 | 0.1465 | 3.5815 | 1.8925 |
| 0.0409 | 8.2786 | 1664 | 3.6037 | 0.1465 | 3.6037 | 1.8983 |
| 0.0409 | 8.2886 | 1666 | 3.6422 | 0.1304 | 3.6422 | 1.9085 |
| 0.0409 | 8.2985 | 1668 | 3.6347 | 0.1304 | 3.6347 | 1.9065 |
| 0.0409 | 8.3085 | 1670 | 3.6043 | 0.1465 | 3.6043 | 1.8985 |
| 0.0409 | 8.3184 | 1672 | 3.5948 | 0.1465 | 3.5948 | 1.8960 |
| 0.0409 | 8.3284 | 1674 | 3.6075 | 0.1465 | 3.6075 | 1.8993 |
| 0.0409 | 8.3383 | 1676 | 3.6047 | 0.1465 | 3.6047 | 1.8986 |
| 0.0409 | 8.3483 | 1678 | 3.5977 | 0.1465 | 3.5977 | 1.8968 |
| 0.0409 | 8.3582 | 1680 | 3.5745 | 0.1465 | 3.5745 | 1.8906 |
| 0.0409 | 8.3682 | 1682 | 3.5770 | 0.1315 | 3.5770 | 1.8913 |
| 0.0409 | 8.3781 | 1684 | 3.5968 | 0.1315 | 3.5968 | 1.8965 |
| 0.0409 | 8.3881 | 1686 | 3.6148 | 0.1315 | 3.6148 | 1.9013 |
| 0.0409 | 8.3980 | 1688 | 3.6299 | 0.1315 | 3.6299 | 1.9052 |
| 0.0409 | 8.4080 | 1690 | 3.6500 | 0.1315 | 3.6500 | 1.9105 |
| 0.0409 | 8.4179 | 1692 | 3.6830 | 0.1315 | 3.6830 | 1.9191 |
| 0.0409 | 8.4279 | 1694 | 3.7038 | 0.1315 | 3.7038 | 1.9245 |
| 0.0409 | 8.4378 | 1696 | 3.7151 | 0.1315 | 3.7151 | 1.9275 |
| 0.0409 | 8.4478 | 1698 | 3.7445 | 0.1354 | 3.7445 | 1.9351 |
| 0.0409 | 8.4577 | 1700 | 3.7780 | 0.1257 | 3.7780 | 1.9437 |
| 0.0409 | 8.4677 | 1702 | 3.8057 | 0.1348 | 3.8057 | 1.9508 |
| 0.0409 | 8.4776 | 1704 | 3.8186 | 0.1348 | 3.8186 | 1.9541 |
| 0.0409 | 8.4876 | 1706 | 3.7971 | 0.1348 | 3.7971 | 1.9486 |
| 0.0409 | 8.4975 | 1708 | 3.7497 | 0.1164 | 3.7497 | 1.9364 |
| 0.0409 | 8.5075 | 1710 | 3.7210 | 0.1168 | 3.7210 | 1.9290 |
| 0.0409 | 8.5174 | 1712 | 3.6956 | 0.1168 | 3.6956 | 1.9224 |
| 0.0409 | 8.5274 | 1714 | 3.6815 | 0.1168 | 3.6815 | 1.9187 |
| 0.0409 | 8.5373 | 1716 | 3.6540 | 0.1168 | 3.6540 | 1.9116 |
| 0.0409 | 8.5473 | 1718 | 3.6105 | 0.1465 | 3.6105 | 1.9001 |
| 0.0409 | 8.5572 | 1720 | 3.5763 | 0.1465 | 3.5763 | 1.8911 |
| 0.0409 | 8.5672 | 1722 | 3.5297 | 0.1364 | 3.5297 | 1.8787 |
| 0.0409 | 8.5771 | 1724 | 3.4637 | 0.1259 | 3.4637 | 1.8611 |
| 0.0409 | 8.5871 | 1726 | 3.4156 | 0.1259 | 3.4156 | 1.8481 |
| 0.0409 | 8.5970 | 1728 | 3.3936 | 0.1259 | 3.3936 | 1.8422 |
| 0.0409 | 8.6070 | 1730 | 3.3728 | 0.1259 | 3.3728 | 1.8365 |
| 0.0409 | 8.6169 | 1732 | 3.3656 | 0.1259 | 3.3656 | 1.8346 |
| 0.0409 | 8.6269 | 1734 | 3.3620 | 0.1058 | 3.3620 | 1.8336 |
| 0.0409 | 8.6368 | 1736 | 3.3636 | 0.1058 | 3.3636 | 1.8340 |
| 0.0409 | 8.6468 | 1738 | 3.3879 | 0.1259 | 3.3879 | 1.8406 |
| 0.0409 | 8.6567 | 1740 | 3.3835 | 0.1259 | 3.3835 | 1.8394 |
| 0.0409 | 8.6667 | 1742 | 3.3951 | 0.1259 | 3.3951 | 1.8426 |
| 0.0409 | 8.6766 | 1744 | 3.4321 | 0.1259 | 3.4321 | 1.8526 |
| 0.0409 | 8.6866 | 1746 | 3.4649 | 0.1364 | 3.4649 | 1.8614 |
| 0.0409 | 8.6965 | 1748 | 3.4832 | 0.1364 | 3.4832 | 1.8663 |
| 0.0409 | 8.7065 | 1750 | 3.4803 | 0.1364 | 3.4803 | 1.8656 |
| 0.0409 | 8.7164 | 1752 | 3.4725 | 0.1518 | 3.4725 | 1.8635 |
| 0.0409 | 8.7264 | 1754 | 3.4597 | 0.1518 | 3.4597 | 1.8600 |
| 0.0409 | 8.7363 | 1756 | 3.4542 | 0.1518 | 3.4542 | 1.8585 |
| 0.0409 | 8.7463 | 1758 | 3.4452 | 0.1518 | 3.4452 | 1.8561 |
| 0.0409 | 8.7562 | 1760 | 3.4371 | 0.1518 | 3.4371 | 1.8539 |
| 0.0409 | 8.7662 | 1762 | 3.4178 | 0.1518 | 3.4178 | 1.8487 |
| 0.0409 | 8.7761 | 1764 | 3.3917 | 0.1415 | 3.3917 | 1.8416 |
| 0.0409 | 8.7861 | 1766 | 3.3676 | 0.1415 | 3.3676 | 1.8351 |
| 0.0409 | 8.7960 | 1768 | 3.3713 | 0.1415 | 3.3713 | 1.8361 |
| 0.0409 | 8.8060 | 1770 | 3.3923 | 0.1415 | 3.3923 | 1.8418 |
| 0.0409 | 8.8159 | 1772 | 3.4096 | 0.1415 | 3.4096 | 1.8465 |
| 0.0409 | 8.8259 | 1774 | 3.4188 | 0.1415 | 3.4188 | 1.8490 |
| 0.0409 | 8.8358 | 1776 | 3.4434 | 0.1415 | 3.4434 | 1.8556 |
| 0.0409 | 8.8458 | 1778 | 3.4483 | 0.1415 | 3.4483 | 1.8570 |
| 0.0409 | 8.8557 | 1780 | 3.4502 | 0.1415 | 3.4502 | 1.8575 |
| 0.0409 | 8.8657 | 1782 | 3.4659 | 0.1259 | 3.4659 | 1.8617 |
| 0.0409 | 8.8756 | 1784 | 3.4609 | 0.1259 | 3.4609 | 1.8604 |
| 0.0409 | 8.8856 | 1786 | 3.4424 | 0.1259 | 3.4424 | 1.8554 |
| 0.0409 | 8.8955 | 1788 | 3.4412 | 0.1259 | 3.4412 | 1.8550 |
| 0.0409 | 8.9055 | 1790 | 3.4424 | 0.1259 | 3.4424 | 1.8554 |
| 0.0409 | 8.9154 | 1792 | 3.4420 | 0.1259 | 3.4420 | 1.8553 |
| 0.0409 | 8.9254 | 1794 | 3.4386 | 0.1259 | 3.4386 | 1.8544 |
| 0.0409 | 8.9353 | 1796 | 3.4399 | 0.1259 | 3.4399 | 1.8547 |
| 0.0409 | 8.9453 | 1798 | 3.4631 | 0.1259 | 3.4631 | 1.8609 |
| 0.0409 | 8.9552 | 1800 | 3.4794 | 0.1105 | 3.4794 | 1.8653 |
| 0.0409 | 8.9652 | 1802 | 3.4966 | 0.1105 | 3.4966 | 1.8699 |
| 0.0409 | 8.9751 | 1804 | 3.5186 | 0.1105 | 3.5186 | 1.8758 |
| 0.0409 | 8.9851 | 1806 | 3.5370 | 0.1212 | 3.5370 | 1.8807 |
| 0.0409 | 8.9950 | 1808 | 3.5348 | 0.1212 | 3.5348 | 1.8801 |
| 0.0409 | 9.0050 | 1810 | 3.5289 | 0.1212 | 3.5289 | 1.8785 |
| 0.0409 | 9.0149 | 1812 | 3.5066 | 0.1105 | 3.5066 | 1.8726 |
| 0.0409 | 9.0249 | 1814 | 3.4909 | 0.1105 | 3.4909 | 1.8684 |
| 0.0409 | 9.0348 | 1816 | 3.4713 | 0.1105 | 3.4713 | 1.8631 |
| 0.0409 | 9.0448 | 1818 | 3.4617 | 0.1105 | 3.4617 | 1.8606 |
| 0.0409 | 9.0547 | 1820 | 3.4546 | 0.1105 | 3.4546 | 1.8587 |
| 0.0409 | 9.0647 | 1822 | 3.4539 | 0.1105 | 3.4539 | 1.8585 |
| 0.0409 | 9.0746 | 1824 | 3.4568 | 0.1105 | 3.4568 | 1.8593 |
| 0.0409 | 9.0846 | 1826 | 3.4666 | 0.1105 | 3.4666 | 1.8619 |
| 0.0409 | 9.0945 | 1828 | 3.4703 | 0.1105 | 3.4703 | 1.8629 |
| 0.0409 | 9.1045 | 1830 | 3.4665 | 0.1105 | 3.4665 | 1.8619 |
| 0.0409 | 9.1144 | 1832 | 3.4600 | 0.1105 | 3.4600 | 1.8601 |
| 0.0409 | 9.1244 | 1834 | 3.4680 | 0.1105 | 3.4680 | 1.8623 |
| 0.0409 | 9.1343 | 1836 | 3.4695 | 0.1105 | 3.4695 | 1.8627 |
| 0.0409 | 9.1443 | 1838 | 3.4765 | 0.1105 | 3.4765 | 1.8645 |
| 0.0409 | 9.1542 | 1840 | 3.4740 | 0.1105 | 3.4740 | 1.8639 |
| 0.0409 | 9.1642 | 1842 | 3.4848 | 0.1212 | 3.4848 | 1.8668 |
| 0.0409 | 9.1741 | 1844 | 3.5046 | 0.1212 | 3.5046 | 1.8721 |
| 0.0409 | 9.1841 | 1846 | 3.5296 | 0.1315 | 3.5296 | 1.8787 |
| 0.0409 | 9.1940 | 1848 | 3.5535 | 0.1315 | 3.5535 | 1.8851 |
| 0.0409 | 9.2040 | 1850 | 3.5734 | 0.1315 | 3.5734 | 1.8903 |
| 0.0409 | 9.2139 | 1852 | 3.5951 | 0.1315 | 3.5951 | 1.8961 |
| 0.0409 | 9.2239 | 1854 | 3.6209 | 0.1315 | 3.6209 | 1.9029 |
| 0.0409 | 9.2338 | 1856 | 3.6443 | 0.1315 | 3.6443 | 1.9090 |
| 0.0409 | 9.2438 | 1858 | 3.6535 | 0.1315 | 3.6535 | 1.9114 |
| 0.0409 | 9.2537 | 1860 | 3.6458 | 0.1315 | 3.6458 | 1.9094 |
| 0.0409 | 9.2637 | 1862 | 3.6355 | 0.1315 | 3.6355 | 1.9067 |
| 0.0409 | 9.2736 | 1864 | 3.6143 | 0.1315 | 3.6143 | 1.9011 |
| 0.0409 | 9.2836 | 1866 | 3.5944 | 0.1315 | 3.5944 | 1.8959 |
| 0.0409 | 9.2935 | 1868 | 3.5842 | 0.1315 | 3.5842 | 1.8932 |
| 0.0409 | 9.3035 | 1870 | 3.5858 | 0.1315 | 3.5858 | 1.8936 |
| 0.0409 | 9.3134 | 1872 | 3.5850 | 0.1315 | 3.5850 | 1.8934 |
| 0.0409 | 9.3234 | 1874 | 3.5724 | 0.1315 | 3.5724 | 1.8901 |
| 0.0409 | 9.3333 | 1876 | 3.5552 | 0.1315 | 3.5552 | 1.8855 |
| 0.0409 | 9.3433 | 1878 | 3.5504 | 0.1315 | 3.5504 | 1.8842 |
| 0.0409 | 9.3532 | 1880 | 3.5601 | 0.1315 | 3.5601 | 1.8868 |
| 0.0409 | 9.3632 | 1882 | 3.5677 | 0.1315 | 3.5676 | 1.8888 |
| 0.0409 | 9.3731 | 1884 | 3.5641 | 0.1315 | 3.5641 | 1.8879 |
| 0.0409 | 9.3831 | 1886 | 3.5571 | 0.1315 | 3.5571 | 1.8860 |
| 0.0409 | 9.3930 | 1888 | 3.5419 | 0.1315 | 3.5419 | 1.8820 |
| 0.0409 | 9.4030 | 1890 | 3.5238 | 0.1315 | 3.5238 | 1.8772 |
| 0.0409 | 9.4129 | 1892 | 3.5129 | 0.1315 | 3.5129 | 1.8743 |
| 0.0409 | 9.4229 | 1894 | 3.5060 | 0.1315 | 3.5060 | 1.8724 |
| 0.0409 | 9.4328 | 1896 | 3.5002 | 0.1315 | 3.5002 | 1.8709 |
| 0.0409 | 9.4428 | 1898 | 3.4872 | 0.1465 | 3.4872 | 1.8674 |
| 0.0409 | 9.4527 | 1900 | 3.4701 | 0.1364 | 3.4701 | 1.8628 |
| 0.0409 | 9.4627 | 1902 | 3.4665 | 0.1364 | 3.4665 | 1.8619 |
| 0.0409 | 9.4726 | 1904 | 3.4709 | 0.1364 | 3.4709 | 1.8630 |
| 0.0409 | 9.4826 | 1906 | 3.4742 | 0.1364 | 3.4742 | 1.8639 |
| 0.0409 | 9.4925 | 1908 | 3.4766 | 0.1465 | 3.4766 | 1.8646 |
| 0.0409 | 9.5025 | 1910 | 3.4784 | 0.1465 | 3.4784 | 1.8650 |
| 0.0409 | 9.5124 | 1912 | 3.4843 | 0.1465 | 3.4843 | 1.8666 |
| 0.0409 | 9.5224 | 1914 | 3.4890 | 0.1465 | 3.4890 | 1.8679 |
| 0.0409 | 9.5323 | 1916 | 3.4929 | 0.1465 | 3.4929 | 1.8689 |
| 0.0409 | 9.5423 | 1918 | 3.5048 | 0.1465 | 3.5048 | 1.8721 |
| 0.0409 | 9.5522 | 1920 | 3.5179 | 0.1465 | 3.5179 | 1.8756 |
| 0.0409 | 9.5622 | 1922 | 3.5342 | 0.1465 | 3.5342 | 1.8799 |
| 0.0409 | 9.5721 | 1924 | 3.5489 | 0.1315 | 3.5489 | 1.8838 |
| 0.0409 | 9.5821 | 1926 | 3.5589 | 0.1315 | 3.5589 | 1.8865 |
| 0.0409 | 9.5920 | 1928 | 3.5610 | 0.1315 | 3.5610 | 1.8871 |
| 0.0409 | 9.6020 | 1930 | 3.5523 | 0.1315 | 3.5523 | 1.8848 |
| 0.0409 | 9.6119 | 1932 | 3.5379 | 0.1465 | 3.5379 | 1.8809 |
| 0.0409 | 9.6219 | 1934 | 3.5212 | 0.1465 | 3.5212 | 1.8765 |
| 0.0409 | 9.6318 | 1936 | 3.5115 | 0.1618 | 3.5115 | 1.8739 |
| 0.0409 | 9.6418 | 1938 | 3.5094 | 0.1618 | 3.5094 | 1.8733 |
| 0.0409 | 9.6517 | 1940 | 3.5114 | 0.1618 | 3.5114 | 1.8739 |
| 0.0409 | 9.6617 | 1942 | 3.5136 | 0.1618 | 3.5136 | 1.8745 |
| 0.0409 | 9.6716 | 1944 | 3.5165 | 0.1618 | 3.5165 | 1.8752 |
| 0.0409 | 9.6816 | 1946 | 3.5220 | 0.1465 | 3.5220 | 1.8767 |
| 0.0409 | 9.6915 | 1948 | 3.5254 | 0.1465 | 3.5254 | 1.8776 |
| 0.0409 | 9.7015 | 1950 | 3.5287 | 0.1465 | 3.5287 | 1.8785 |
| 0.0409 | 9.7114 | 1952 | 3.5382 | 0.1315 | 3.5382 | 1.8810 |
| 0.0409 | 9.7214 | 1954 | 3.5456 | 0.1315 | 3.5456 | 1.8830 |
| 0.0409 | 9.7313 | 1956 | 3.5474 | 0.1315 | 3.5474 | 1.8834 |
| 0.0409 | 9.7413 | 1958 | 3.5475 | 0.1315 | 3.5475 | 1.8835 |
| 0.0409 | 9.7512 | 1960 | 3.5471 | 0.1315 | 3.5471 | 1.8834 |
| 0.0409 | 9.7612 | 1962 | 3.5455 | 0.1315 | 3.5455 | 1.8829 |
| 0.0409 | 9.7711 | 1964 | 3.5443 | 0.1315 | 3.5443 | 1.8826 |
| 0.0409 | 9.7811 | 1966 | 3.5387 | 0.1315 | 3.5387 | 1.8811 |
| 0.0409 | 9.7910 | 1968 | 3.5305 | 0.1315 | 3.5305 | 1.8790 |
| 0.0409 | 9.8010 | 1970 | 3.5239 | 0.1315 | 3.5239 | 1.8772 |
| 0.0409 | 9.8109 | 1972 | 3.5186 | 0.1465 | 3.5186 | 1.8758 |
| 0.0409 | 9.8209 | 1974 | 3.5184 | 0.1465 | 3.5184 | 1.8757 |
| 0.0409 | 9.8308 | 1976 | 3.5182 | 0.1465 | 3.5182 | 1.8757 |
| 0.0409 | 9.8408 | 1978 | 3.5190 | 0.1465 | 3.5190 | 1.8759 |
| 0.0409 | 9.8507 | 1980 | 3.5194 | 0.1465 | 3.5194 | 1.8760 |
| 0.0409 | 9.8607 | 1982 | 3.5205 | 0.1465 | 3.5205 | 1.8763 |
| 0.0409 | 9.8706 | 1984 | 3.5209 | 0.1465 | 3.5209 | 1.8764 |
| 0.0409 | 9.8806 | 1986 | 3.5230 | 0.1465 | 3.5230 | 1.8770 |
| 0.0409 | 9.8905 | 1988 | 3.5234 | 0.1465 | 3.5234 | 1.8771 |
| 0.0409 | 9.9005 | 1990 | 3.5249 | 0.1465 | 3.5249 | 1.8775 |
| 0.0409 | 9.9104 | 1992 | 3.5261 | 0.1465 | 3.5261 | 1.8778 |
| 0.0409 | 9.9204 | 1994 | 3.5261 | 0.1465 | 3.5261 | 1.8778 |
| 0.0409 | 9.9303 | 1996 | 3.5250 | 0.1465 | 3.5250 | 1.8775 |
| 0.0409 | 9.9403 | 1998 | 3.5246 | 0.1465 | 3.5246 | 1.8774 |
| 0.0365 | 9.9502 | 2000 | 3.5243 | 0.1465 | 3.5243 | 1.8773 |
| 0.0365 | 9.9602 | 2002 | 3.5238 | 0.1465 | 3.5238 | 1.8772 |
| 0.0365 | 9.9701 | 2004 | 3.5236 | 0.1465 | 3.5236 | 1.8771 |
| 0.0365 | 9.9801 | 2006 | 3.5233 | 0.1465 | 3.5233 | 1.8770 |
| 0.0365 | 9.9900 | 2008 | 3.5232 | 0.1465 | 3.5232 | 1.8770 |
| 0.0365 | 10.0 | 2010 | 3.5232 | 0.1465 | 3.5232 | 1.8770 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
PrunaAI/google-shieldgemma-2b-QUANTO-float8bit-smashed | PrunaAI | "2024-08-16T19:30:04Z" | 6 | 0 | null | [
"pruna-ai",
"base_model:google/shieldgemma-2b",
"base_model:finetune:google/shieldgemma-2b",
"region:us"
] | null | "2024-08-16T19:26:48Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: google/shieldgemma-2b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto. A minimal illustrative sketch of this style of quantization is shown right after this list.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo google/shieldgemma-2b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/google-shieldgemma-2b-QUANTO-float8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("google/shieldgemma-2b")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/shieldgemma-2b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
cria111/distilbert-base-uncased-no-perturb | cria111 | "2024-04-29T11:10:43Z" | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-04-29T10:57:44Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-no-perturb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-no-perturb
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1515
- Precision: 0.4338
- Recall: 0.4111
- F1: 0.4222
- Accuracy: 0.9627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
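
As an illustration only, the settings above correspond roughly to the following `TrainingArguments`; the output directory is a hypothetical name, and the Adam betas/epsilon listed above are the optimizer defaults.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above; output_dir is
# an assumed name, and Adam betas=(0.9, 0.999), epsilon=1e-08 are the defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-no-perturb",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```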
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 103 | 0.1867 | 0.2194 | 0.1794 | 0.1974 | 0.9505 |
| No log | 2.0 | 206 | 0.1554 | 0.3708 | 0.3714 | 0.3711 | 0.9596 |
| No log | 3.0 | 309 | 0.1515 | 0.4338 | 0.4111 | 0.4222 | 0.9627 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.0+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e | rithwik-db | "2023-05-03T02:57:59Z" | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-05-03T02:57:53Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e')
model = AutoModel.from_pretrained('rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/e5-base-unsupervised-pseudo-gpl-fiqa-131a12-d23573-4be015-1bbc3e)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7200 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
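
Put together, a minimal sketch of how these parameters map onto a sentence-transformers training loop might look like the following; the starting checkpoint and the training pairs shown are assumptions, not taken from this card.
```python
# Illustrative sketch only: wiring the DataLoader, loss, and fit() parameters
# listed above. The starting checkpoint and example pairs are placeholders.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("intfloat/e5-base-unsupervised")  # assumed starting checkpoint
train_examples = [InputExample(texts=["example query", "example passage"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10000,
    weight_decay=0.01,
    optimizer_params={"lr": 2e-05},
)
```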
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
pankajmlai/SmolLM2-FT-DPO | pankajmlai | "2024-12-25T21:11:55Z" | 146 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:pankajmlai/SmolLM2-FT-MyDataset",
"base_model:finetune:pankajmlai/SmolLM2-FT-MyDataset",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-25T21:11:26Z" | ---
base_model: pankajmlai/SmolLM2-FT-MyDataset
library_name: transformers
model_name: SmolLM2-FT-DPO
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- dpo
licence: license
---
# Model Card for SmolLM2-FT-DPO
This model is a fine-tuned version of [pankajmlai/SmolLM2-FT-MyDataset](https://huggingface.co/pankajmlai/SmolLM2-FT-MyDataset).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="pankajmlai/SmolLM2-FT-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mountaintree-none/huggingface/runs/oamxza5y)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
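
As a rough illustration of that procedure (not the exact script used to produce this checkpoint; the preference dataset and output directory are placeholders), a minimal TRL `DPOTrainer` setup looks like this:
```python
# Minimal DPO sketch with TRL; the preference dataset and output_dir below are
# placeholders and were not necessarily used for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "pankajmlai/SmolLM2-FT-MyDataset"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder data

training_args = DPOConfig(output_dir="SmolLM2-FT-DPO")
trainer = DPOTrainer(model=model, args=training_args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```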
### Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Snim/dqn-SpaceInvadersNoFrameskip-v4 | Snim | "2023-02-08T19:25:49Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-08T19:25:04Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 753.50 +/- 272.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Snim -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Snim -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Snim
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
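
For reference, here is a rough sketch (not taken from the RL Zoo) of how these hyperparameters map onto a direct Stable-Baselines3 `DQN` instantiation; the environment wrapping shown is a simplified assumption.
```python
# Illustrative only: approximate stand-alone equivalent of the zoo config above.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Atari preprocessing + 4-frame stacking, roughly matching env_wrapper/frame_stack.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    learning_rate=1e-4,
    buffer_size=100_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
)
model.learn(total_timesteps=1_000_000)
```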
|
nvidia/stt_en_conformer_transducer_large | nvidia | "2025-02-27T13:09:58Z" | 16 | 7 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"dataset:fisher_corpus",
"dataset:Switchboard-1",
"dataset:WSJ-0",
"dataset:WSJ-1",
"dataset:National-Singapore-Corpus-Part-1",
"dataset:National-Singapore-Corpus-Part-6",
"dataset:vctk",
"dataset:VoxPopuli",
"dataset:Europarl-ASR",
"dataset:Multilingual-LibriSpeech",
"dataset:mozilla-foundation/common_voice_8_0",
"dataset:MLCommons/peoples_speech",
"arxiv:2005.08100",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | "2022-09-22T18:40:30Z" | ---
language:
- en
library_name: nemo
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National-Singapore-Corpus-Part-1
- National-Singapore-Corpus-Part-6
- vctk
- VoxPopuli
- Europarl-ASR
- Multilingual-LibriSpeech
- mozilla-foundation/common_voice_8_0
- MLCommons/peoples_speech
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- Transducer
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: stt_en_conformer_transducer_large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.7
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.7
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
config: english
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 5.8
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
config: en
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 7.8
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Wall Street Journal 92
type: wsj_0
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.5
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: Wall Street Journal 93
type: wsj_1
args:
language: en
metrics:
- name: Test WER
type: wer
value: 2.1
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: National Singapore Corpus
type: nsc_part_1
args:
language: en
metrics:
- name: Test WER
type: wer
value: 5.9
---
# NVIDIA Conformer-Transducer Large (en-US)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model transcribes speech into the lower-case English alphabet, along with spaces and apostrophes.
It is a large version of the Conformer-Transducer model (around 120M parameters).
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_en_conformer_transducer_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
output = asr_model.transcribe(['2086-149220-0033.wav'])
print(output[0].text)
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_en_conformer_transducer_large" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16,000 Hz (16 kHz) mono-channel audio (WAV files) as input.
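Recordings that are not already 16 kHz mono can be converted beforehand, for example with torchaudio; this is a minimal sketch and the file names are placeholders.
```python
import torchaudio

waveform, sample_rate = torchaudio.load("input.wav")
waveform = waveform.mean(dim=0, keepdim=True)  # downmix to mono if multi-channel
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
torchaudio.save("input_16k.wav", waveform, 16000)
```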
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
The Conformer-Transducer model is an autoregressive variant of the Conformer model [1] for Automatic Speech Recognition that uses Transducer loss/decoding instead of CTC loss. You can find more details on this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
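To make the distinction from CTC concrete, below is a minimal sketch of computing the Transducer (RNN-T) loss on dummy joint-network outputs. It assumes torchaudio ≥ 0.10 (which ships `torchaudio.functional.rnnt_loss`) and uses made-up tensor sizes; it is not part of the NeMo training code.
```python
import torch
import torchaudio

batch, time_steps, target_len, vocab = 2, 50, 10, 29  # made-up sizes; vocab includes the blank symbol

# Joint-network output: one score per (time frame, target position, vocabulary entry)
logits = torch.randn(batch, time_steps, target_len + 1, vocab, dtype=torch.float32)
targets = torch.randint(1, vocab, (batch, target_len), dtype=torch.int32)
logit_lengths = torch.full((batch,), time_steps, dtype=torch.int32)
target_lengths = torch.full((batch,), target_len, dtype=torch.int32)

loss = torchaudio.functional.rnnt_loss(
    logits, targets, logit_lengths, target_lengths, blank=0, reduction="mean"
)
print(loss.item())
```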
## Training
The NeMo toolkit [3] was used to train the models for several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on a composite dataset (NeMo ASRSET) comprising several thousand hours of English speech:
- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hrs subset
- Mozilla Common Voice (v8.0)
- People's Speech - 12,000 hrs subset
Note: older versions of the model may have trained on smaller set of datasets.
## Performance
The list of available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | LS test-other | LS test-clean | WSJ Eval92 | WSJ Dev93 | NSC Part 1 | MLS Test | MCV Test 6.1 | MCV Test 8.0 | Train Dataset |
|---------|-----------------------|-----------------|---------------|---------------|------------|-----------|-----|-------|------|----|------|
| 1.10.0 | SentencePiece Unigram | 1024 | 3.7 | 1.7 | 1.5 | 2.1 | 5.9 | 5.8 | 6.5 | 7.8 | NeMo ASRSET 3.0 |
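For reference, the WER of a transcript against its reference text can be computed with the `jiwer` package; this is a small sketch and assumes `jiwer` has been installed separately (`pip install jiwer`).
```python
from jiwer import wer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

print(f"WER: {wer(reference, hypothesis):.2%}")  # fraction of word-level errors
```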
## Limitations
Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## License
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. |
TestingjustTesting/EyeofModor | TestingjustTesting | "2024-03-17T03:14:57Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-03-17T03:14:02Z" | ---
license: apache-2.0
---
show me a tower with a hundred beams of light entering
|
HealthNLP/pubmedbert_conmod | HealthNLP | "2024-02-15T21:02:40Z" | 98 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-24T22:06:22Z" | ---
license: mit
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6886
- Accuracy: 0.8143
- F1: [0.92816572 0.56028369 0.1 0.2633452 ]
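Although the card does not document the label set, the checkpoint can be loaded for inference with the standard `transformers` pipeline; this is a minimal sketch and the example sentence is made up.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HealthNLP/pubmedbert_conmod")
print(classifier("The patient denies any history of chest pain."))
```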
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------------------------------------:|
| No log | 1.0 | 37 | 0.4891 | 0.8235 | [0.91702786 0.33333333 0. 0.10837438] |
| No log | 2.0 | 74 | 0.4762 | 0.8321 | [0.93139159 0.48466258 0. 0.22857143] |
| No log | 3.0 | 111 | 0.5084 | 0.8208 | [0.92995725 0.44887781 0. 0.19266055] |
| No log | 4.0 | 148 | 0.5519 | 0.8105 | [0.92421691 0.44444444 0.06557377 0.30769231] |
| No log | 5.0 | 185 | 0.5805 | 0.8294 | [0.93531353 0.52336449 0.09345794 0.27131783] |
| No log | 6.0 | 222 | 0.6778 | 0.7955 | [0.91344509 0.55305466 0.15463918 0.29166667] |
| No log | 7.0 | 259 | 0.6407 | 0.8213 | [0.93298292 0.51383399 0.10191083 0.2519084 ] |
| No log | 8.0 | 296 | 0.6639 | 0.8272 | [0.9326288 0.55052265 0.18181818 0.26271186] |
| No log | 9.0 | 333 | 0.6863 | 0.8192 | [0.93071286 0.55830389 0.11042945 0.2761194 ] |
| No log | 10.0 | 370 | 0.6886 | 0.8143 | [0.92816572 0.56028369 0.1 0.2633452 ] |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
SeanLee97/angle-llama-7b-zhnli-v1 | SeanLee97 | "2023-10-29T01:25:27Z" | 0 | 2 | transformers | [
"transformers",
"en",
"dataset:shibing624/nli-zh-all",
"dataset:shibing624/nli_zh",
"arxiv:2309.12871",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2023-10-28T08:22:14Z" | ---
library_name: transformers
license: mit
datasets:
- shibing624/nli-zh-all
- shibing624/nli_zh
language:
- en
metrics:
- spearmanr
---
# AnglE📐: Angle-optimized Text Embeddings
> It is Angle 📐, not Angel 👼.
🔥 A New SOTA Model for Semantic Textual Similarity!
Github: https://github.com/SeanLee97/AnglE
<a href="https://arxiv.org/abs/2309.12871">
<img src="https://img.shields.io/badge/Arxiv-2306.06843-yellow.svg?style=flat-square" alt="https://arxiv.org/abs/2309.12871" />
</a>
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sick-r-1?p=angle-optimized-text-embeddings)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts16?p=angle-optimized-text-embeddings)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts15?p=angle-optimized-text-embeddings)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts14?p=angle-optimized-text-embeddings)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts13?p=angle-optimized-text-embeddings)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts12?p=angle-optimized-text-embeddings)
[](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=angle-optimized-text-embeddings)
**STS Results**
| Model | ATEC | BQ | LCQMC | PAWSX | STS-B | SOHU-dd | SOHU-dc | Avg. |
| ------- |-------|-------|-------|-------|-------|--------------|-----------------|-------|
| ^[shibing624/text2vec-bge-large-chinese](https://huggingface.co/shibing624/text2vec-bge-large-chinese) | 38.41 | 61.34 | 71.72 | 35.15 | 76.44 | 71.81 | 63.15 | 59.72 |
| ^[shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase) | 44.89 | 63.58 | 74.24 | 40.90 | 78.93 | 76.70 | 63.30 | 63.08 |
| [SeanLee97/angle-roberta-wwm-base-zhnli-v1](https://huggingface.co/SeanLee97/angle-roberta-wwm-base-zhnli-v1) | 49.49 | 72.47 | 78.33 | 59.13 | 77.14 | 72.36 | 60.53 | **67.06** |
| [SeanLee97/angle-llama-7b-zhnli-v1](https://huggingface.co/SeanLee97/angle-llama-7b-zhnli-v1) | 50.44 | 71.95 | 78.90 | 56.57 | 81.11 | 68.11 | 52.02 | 65.59 |
^ denotes baselines; their results are retrieved from https://github.com/shibing624/text2vec
## Usage
```python
from angle_emb import AnglE, Prompts
angle = AnglE.from_pretrained('NousResearch/Llama-2-7b-hf', pretrained_lora_path='SeanLee97/angle-llama-7b-zhnli-v1')
# Choose the prompt matching the model; this model corresponds to Prompts.B
print('All predefined prompts:', Prompts.list_prompts())
angle.set_prompt(prompt=Prompts.B)
print('prompt:', angle.prompt)
vec = angle.encode({'text': '你好世界'}, to_numpy=True)
print(vec)
vecs = angle.encode([{'text': '你好世界1'}, {'text': '你好世界2'}], to_numpy=True)
print(vecs)
```
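For STS-style scoring, the embeddings returned above can be compared with cosine similarity. The sketch below uses plain numpy and assumes `vecs` is the `(2, hidden_dim)` array produced by the last `encode` call in the snippet above.
```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# `vecs` comes from angle.encode([...], to_numpy=True) in the snippet above
score = cosine_similarity(vecs[0], vecs[1])
print(score)
```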
## Citation
You are welcome to use our code and pre-trained models. If you do, please support us by citing our work as follows:
```bibtex
@article{li2023angle,
title={AnglE-Optimized Text Embeddings},
author={Li, Xianming and Li, Jing},
journal={arXiv preprint arXiv:2309.12871},
year={2023}
}
``` |
GiulioRomualdi/ppo-LunarLander-v2 | GiulioRomualdi | "2023-10-03T09:03:02Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-03T09:02:15Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.70 +/- 17.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename is an assumption based on the usual SB3 Hub convention; adjust if needed
checkpoint = load_from_hub("GiulioRomualdi/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e75_non_member_shadow18 | FounderOfHuggingface | "2023-12-09T05:54:27Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-09T05:54:23Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
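In the absence of author-provided instructions, a minimal sketch for loading this adapter on top of its `gpt2` base model (as declared in the metadata above) might look like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "FounderOfHuggingface/gpt2_lora_r16_dbpedia_14_t18_e75_non_member_shadow18")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
```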
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
bugdaryan/WizardCoderSQL-15B-V1.0-QLoRA | bugdaryan | "2023-09-19T13:42:41Z" | 7 | 0 | peft | [
"peft",
"code",
"sql",
"en",
"dataset:bugdaryan/spider-natsql-wikisql-instruct",
"base_model:WizardLMTeam/WizardCoder-15B-V1.0",
"base_model:adapter:WizardLMTeam/WizardCoder-15B-V1.0",
"license:openrail",
"region:us"
] | null | "2023-09-08T21:35:28Z" | ---
language:
- en
license: openrail
library_name: peft
tags:
- code
- sql
datasets:
- bugdaryan/spider-natsql-wikisql-instruct
base_model: WizardLM/WizardCoder-15B-V1.0
---
# LoRA adapters for model WizardCoderSQL
## Overview
- **Model Name**: WizardCoderSQL-15B-V1.0-QLoRA
- **Repository**: [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
- **License**: [OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
- **Fine-Tuned Model Name**: WizardCoderSQL-15B-V1.0
- **Fine-Tuned Dataset**: [bugdaryan/spider-natsql-wikisql-instruct](https://huggingface.co/datasets/bugdaryan/spider-natsql-wikisql-instruct)
## Description
This repository contains a LoRA fine-tuned version of the Wizard Coder 15B model. The LoRA attention mechanism has been customized with specific parameters to enhance model performance in certain tasks. Additionally, the fine-tuned model has been merged with custom parameters to create a specialized model for specific use cases.
## Model Details
- **Base Model**: Wizard Coder 15B
- **Fine-Tuned Model Name**: WizardCoderSQL-15B-V1.0-QLoRA
- **Fine-Tuning Parameters**:
- QLoRA Parameters:
- LoRA Attention Dimension (lora_r): 64
- LoRA Alpha Parameter (lora_alpha): 16
- LoRA Dropout Probability (lora_dropout): 0.1
- bitsandbytes Parameters:
- Use 4-bit Precision Base Model (use_4bit): True
- Compute Dtype for 4-bit Base Models (bnb_4bit_compute_dtype): float16
- Quantization Type (bnb_4bit_quant_type): nf4
- Activate Nested Quantization (use_nested_quant): False
- TrainingArguments Parameters:
- Number of Training Epochs (num_train_epochs): 1
- Enable FP16/BF16 Training (fp16/bf16): False/True
- Batch Size per GPU for Training (per_device_train_batch_size): 48
- Batch Size per GPU for Evaluation (per_device_eval_batch_size): 4
- Gradient Accumulation Steps (gradient_accumulation_steps): 1
- Enable Gradient Checkpointing (gradient_checkpointing): True
- Maximum Gradient Norm (max_grad_norm): 0.3
- Initial Learning Rate (learning_rate): 2e-4
- Weight Decay (weight_decay): 0.001
- Optimizer (optim): paged_adamw_32bit
- Learning Rate Scheduler Type (lr_scheduler_type): cosine
- Maximum Training Steps (max_steps): -1
- Warmup Ratio (warmup_ratio): 0.03
- Group Sequences into Batches with Same Length (group_by_length): True
- Save Checkpoint Every X Update Steps (save_steps): 0
- Log Every X Update Steps (logging_steps): 25
- SFT Parameters:
- Maximum Sequence Length (max_seq_length): 500
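For readers who want to reproduce a comparable setup, the QLoRA and bitsandbytes parameters listed above map roughly onto the following `peft` and `transformers` objects. This is a sketch, not the exact training script used for this checkpoint.
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # use_4bit
    bnb_4bit_compute_dtype=torch.float16,   # compute dtype for the 4-bit base weights
    bnb_4bit_quant_type="nf4",              # quantization type
    bnb_4bit_use_double_quant=False,        # nested quantization disabled
)

lora_config = LoraConfig(
    r=64,               # LoRA attention dimension
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
```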
## Usage
To use this fine-tuned LoRA model and merged parameters, you can load it using the Hugging Face Transformers library in Python. Here's an example of how to use it:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
model_name = 'WizardLM/WizardCoder-15B-V1.0'
adapter_name = 'bugdaryan/WizardCoderSQL-15B-V1.0-QLoRA'
base_model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
model = PeftModel.from_pretrained(base_model, adapter_name)
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
tables = "CREATE TABLE sales ( sale_id number PRIMARY KEY, product_id number, customer_id number, salesperson_id number, sale_date DATE, quantity number, FOREIGN KEY (product_id) REFERENCES products(product_id), FOREIGN KEY (customer_id) REFERENCES customers(customer_id), FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id)); CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, product_id number, supply_price number, FOREIGN KEY (product_id) REFERENCES products(product_id)); CREATE TABLE customers ( customer_id number PRIMARY KEY, name text, address text ); CREATE TABLE salespeople ( salesperson_id number PRIMARY KEY, name text, region text ); CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, product_id number, supply_price number );"
question = 'Find the salesperson who made the most sales.'
prompt = f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Convert text to SQLite query: {question} {tables} ### Response:"
ans = pipe(prompt, max_new_tokens=200)
print(ans[0]['generated_text'])
```
## Disclaimer
WizardCoderSQL model follows the same license as WizardCoder. The content produced by any version of WizardCoderSQL is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results. |
cleanrl/KungFuMaster-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3 | cleanrl | "2023-03-09T23:07:32Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"KungFuMaster-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-09T23:07:30Z" | ---
tags:
- KungFuMaster-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: KungFuMaster-v5
type: KungFuMaster-v5
metrics:
- type: mean_reward
value: 24580.00 +/- 5548.84
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **KungFuMaster-v5**
This is a trained model of a PPO agent playing KungFuMaster-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_machado_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_machado_atari_wrapper --env-id KungFuMaster-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/cleanba_ppo_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_machado_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'KungFuMaster-v5',
'exp_name': 'cleanba_ppo_envpool_machado_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
messawey/orca_mini_v3_13b | messawey | "2024-03-14T00:38:18Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"psmathur/orca_mini_v3_13b",
"garage-bAInd/Platypus2-13B",
"WizardLM/WizardMath-13B-V1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-12T13:00:40Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- psmathur/orca_mini_v3_13b
- garage-bAInd/Platypus2-13B
- WizardLM/WizardMath-13B-V1.0
---
# psmathur/orca_mini_v3_13b
psmathur/orca_mini_v3_13b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [psmathur/orca_mini_v3_13b](https://huggingface.co/psmathur/orca_mini_v3_13b)
* [garage-bAInd/Platypus2-13B](https://huggingface.co/garage-bAInd/Platypus2-13B)
* [WizardLM/WizardMath-13B-V1.0](https://huggingface.co/WizardLM/WizardMath-13B-V1.0)
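Such a merge can typically be reproduced by saving the configuration from the next section to a YAML file and invoking the mergekit CLI; this is a sketch, with placeholder file and output names, and assumes mergekit is installed.
```bash
pip install mergekit
mergekit-yaml merge-config.yaml ./merged-orca-mini-13b
```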
## 🧩 Configuration
```yaml
models:
- model: psmathur/orca_mini_v3_13b
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: garage-bAInd/Platypus2-13B
parameters:
density: 0.5
weight: [0, 0.3, 0.7, 1] # weight gradient
- model: WizardLM/WizardMath-13B-V1.0
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: TheBloke/Llama-2-13B-fp16
parameters:
normalize: true
int8_mask: true
  dtype: float16
```
 |
ntsema/wav2vec2-xlsr-53-espeak-cv-ft-mhr3-ntsema-colab | ntsema | "2022-11-12T08:33:31Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-11-12T06:56:00Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-mhr3-ntsema-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-mhr3-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7701
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.329 | 5.79 | 400 | 1.3162 | 1.0 |
| 1.5529 | 11.59 | 800 | 0.6968 | 1.0 |
| 0.8373 | 17.39 | 1200 | 0.7345 | 1.0 |
| 0.4959 | 23.19 | 1600 | 0.7296 | 1.0 |
| 0.3207 | 28.98 | 2000 | 0.7701 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
juierror/wav2vec2-large-xls-r-thai-test | juierror | "2022-01-02T14:18:08Z" | 64 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-thai-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-thai-test
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7728
- eval_wer: 0.9490
- eval_runtime: 678.2819
- eval_samples_per_second: 3.226
- eval_steps_per_second: 0.404
- epoch: 2.56
- step: 600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Shalazary/ruBert-base-sberquad-0.01-len_3-filtered-negative | Shalazary | "2024-04-16T11:37:30Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ai-forever/ruBert-base",
"base_model:adapter:ai-forever/ruBert-base",
"license:apache-2.0",
"region:us"
] | null | "2024-04-16T11:37:15Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: ai-forever/ruBert-base
model-index:
- name: ruBert-base-sberquad-0.01-len_3-filtered-negative
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruBert-base-sberquad-0.01-len_3-filtered-negative
This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
TOMFORD79/RDFOR79_T7 | TOMFORD79 | "2025-02-25T17:43:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-25T17:18:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
premsuresh/bart-finetuned-mathqa-decomposition | premsuresh | "2022-11-29T09:45:19Z" | 175 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-11-29T09:26:41Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-mathqa-decomposition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-mathqa-decomposition
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
lamarr-llm-development/elbedding-autogptq-int8 | lamarr-llm-development | "2025-02-24T08:24:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | feature-extraction | "2025-02-24T08:21:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Qusaiiii_-_CustomAccountant-gguf | RichardErkhov | "2025-03-19T08:34:54Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T08:31:44Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CustomAccountant - GGUF
- Model creator: https://huggingface.co/Qusaiiii/
- Original model: https://huggingface.co/Qusaiiii/CustomAccountant/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CustomAccountant.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q2_K.gguf) | Q2_K | 0.08GB |
| [CustomAccountant.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [CustomAccountant.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [CustomAccountant.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [CustomAccountant.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [CustomAccountant.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q3_K.gguf) | Q3_K | 0.09GB |
| [CustomAccountant.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [CustomAccountant.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [CustomAccountant.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [CustomAccountant.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q4_0.gguf) | Q4_0 | 0.1GB |
| [CustomAccountant.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [CustomAccountant.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [CustomAccountant.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q4_K.gguf) | Q4_K | 0.11GB |
| [CustomAccountant.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [CustomAccountant.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q4_1.gguf) | Q4_1 | 0.11GB |
| [CustomAccountant.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q5_0.gguf) | Q5_0 | 0.11GB |
| [CustomAccountant.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [CustomAccountant.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q5_K.gguf) | Q5_K | 0.12GB |
| [CustomAccountant.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [CustomAccountant.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q5_1.gguf) | Q5_1 | 0.12GB |
| [CustomAccountant.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q6_K.gguf) | Q6_K | 0.13GB |
| [CustomAccountant.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qusaiiii_-_CustomAccountant-gguf/blob/main/CustomAccountant.Q8_0.gguf) | Q8_0 | 0.17GB |
Original model description:
---
license: apache-2.0
library_name: transformers
tags:
- transformers
- text-generation
- conversational
---
|
RichardErkhov/Salesforce_-_xLAM-7b-r-4bits | RichardErkhov | "2024-11-12T15:57:54Z" | 8 | 0 | null | [
"safetensors",
"mistral",
"arxiv:2409.03215",
"arxiv:2406.18518",
"arxiv:2402.15506",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-11-12T15:55:19Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
xLAM-7b-r - bnb 4bits
- Model creator: https://huggingface.co/Salesforce/
- Original model: https://huggingface.co/Salesforce/xLAM-7b-r/
Original model description:
---
extra_gated_heading: Acknowledge to follow corresponding license to access the repository
extra_gated_button_content: Agree and access repository
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
license: cc-by-nc-4.0
datasets:
- Salesforce/xlam-function-calling-60k
language:
- en
pipeline_tag: text-generation
tags:
- function-calling
- LLM Agent
- tool-use
- mistral
- pytorch
library_name: transformers
---
<p align="center">
<img width="500px" alt="xLAM" src="https://huggingface.co/datasets/jianguozhang/logos/resolve/main/xlam-no-background.png">
</p>
<p align="center">
<a href="https://www.salesforceairesearch.com/projects/xlam-large-action-models">[Homepage]</a> |
<a href="https://arxiv.org/abs/2409.03215">[Paper]</a> |
<a href="https://github.com/SalesforceAIResearch/xLAM">[Github]</a> |
<a href="https://discord.gg/tysWwgZyQ2">[Discord]</a> |
<a href="https://blog.salesforceairesearch.com/large-action-model-ai-agent/">[Blog]</a> |
<a href="https://huggingface.co/spaces/Tonic/Salesforce-Xlam-7b-r">[Community Demo]</a>
</p>
<hr>
Welcome to the xLAM model family! [Large Action Models (LAMs)](https://blog.salesforceairesearch.com/large-action-models/) are advanced large language models designed to enhance decision-making and translate user intentions into executable actions that interact with the world. LAMs autonomously plan and execute tasks to achieve specific goals, serving as the brains of AI agents. They have the potential to automate workflow processes across various domains, making them invaluable for a wide range of applications.
**The model release is exclusively for research purposes. A new and enhanced version of xLAM will soon be available exclusively to customers on our Platform.**
## Table of Contents
- [Model Series](#model-series)
- [Repository Overview](#repository-overview)
- [Benchmark Results](#benchmark-results)
- [Usage](#usage)
- [Basic Usage with Huggingface](#basic-usage-with-huggingface)
- [License](#license)
- [Citation](#citation)
## Model Series
We provide a series of xLAMs in different sizes to cater to various applications, including those optimized for function-calling and general agent applications:
| Model | # Total Params | Context Length | Download Model | Download GGUF files |
|------------------------|----------------|----------------|----------------|----------|
| xLAM-1b-fc-r | 1.35B | 16k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-1b-fc-r) | [🤗 Link](https://huggingface.co/Salesforce/xLAM-1b-fc-r-gguf) |
| xLAM-7b-fc-r | 6.91B | 4k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-7b-fc-r) | [🤗 Link](https://huggingface.co/Salesforce/xLAM-7b-fc-r-gguf) |
| xLAM-7b-r | 7.24B | 32k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-7b-r) | -- |
| xLAM-8x7b-r | 46.7B | 32k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-8x7b-r) | -- |
| xLAM-8x22b-r | 141B | 64k | [🤗 Link](https://huggingface.co/Salesforce/xLAM-8x22b-r) | -- |
For our Function-calling series (more details are included at [here](https://huggingface.co/Salesforce/xLAM-7b-fc-r)), we also provide their quantized [GGUF](https://huggingface.co/docs/hub/en/gguf) files for efficient deployment and execution. GGUF is a file format designed to efficiently store and load large language models, making GGUF ideal for running AI models on local devices with limited resources, enabling offline functionality and enhanced privacy.
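As an illustration, a GGUF file from the `fc` series can be run locally with the `llama-cpp-python` bindings. This is a sketch only; the package, file name, and prompt are assumptions rather than part of the official instructions.
```python
from llama_cpp import Llama

# Point model_path at whichever quantized GGUF file you downloaded
llm = Llama(model_path="./xLAM-7b-fc-r.Q4_K_M.gguf", n_ctx=4096)
out = llm("List two tasks a function-calling model can automate.", max_tokens=128)
print(out["choices"][0]["text"])
```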
For more details, check our [GitHub](https://github.com/SalesforceAIResearch/xLAM) and [paper](https://arxiv.org/abs/2409.03215).
## Repository Overview
This repository is about the general tool-use series. For more specialized function-calling models, please take a look at our `fc` series [here](https://huggingface.co/Salesforce/xLAM-7b-fc-r).
The instructions will guide you through the setup, usage, and integration of our model series with HuggingFace.
### Framework Versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
## Usage
### Basic Usage with Huggingface
To use the model from Huggingface, please first install the `transformers` library:
```bash
pip install transformers>=4.41.0
```
Please note that our model works best with our provided prompt format.
It allows us to extract JSON output that is similar to the [function-calling mode of ChatGPT](https://platform.openai.com/docs/guides/function-calling).
We use the following examples to illustrate how to use our model for 1) a single-turn use case and 2) a multi-turn use case.
#### 1. Single-turn use case
````python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.random.manual_seed(0)
model_name = "Salesforce/xLAM-7b-r"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Please use our provided instruction prompt for best performance
task_instruction = """
Based on the previous context and API request history, generate an API request or a response as an AI assistant.""".strip()
format_instruction = """
The output should be of the JSON format, which specifies a list of generated function calls. The example format is as follows, please make sure the parameter type is correct. If no function call is needed, please make
tool_calls an empty list "[]".
```
{"thought": "the thought process, or an empty string", "tool_calls": [{"name": "api_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}]}
```
""".strip()
# Define the input query and available tools
query = "What's the weather like in New York in fahrenheit?"
get_weather_api = {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, New York"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature to return"
}
},
"required": ["location"]
}
}
search_api = {
"name": "search",
"description": "Search for information on the internet",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query, e.g. 'latest news on AI'"
}
},
"required": ["query"]
}
}
openai_format_tools = [get_weather_api, search_api]
# Helper function to convert openai format tools to our more concise xLAM format
def convert_to_xlam_tool(tools):
    """Convert an OpenAI-format tool definition (or a list of them) to the more concise xLAM format."""
if isinstance(tools, dict):
return {
"name": tools["name"],
"description": tools["description"],
"parameters": {k: v for k, v in tools["parameters"].get("properties", {}).items()}
}
elif isinstance(tools, list):
return [convert_to_xlam_tool(tool) for tool in tools]
else:
return tools
def build_conversation_history_prompt(conversation_history: list):
parsed_history = []
for step_data in conversation_history:
parsed_history.append({
"step_id": step_data["step_id"],
"thought": step_data["thought"],
"tool_calls": step_data["tool_calls"],
"next_observation": step_data["next_observation"],
"user_input": step_data['user_input']
})
history_string = json.dumps(parsed_history)
return f"\n[BEGIN OF HISTORY STEPS]\n{history_string}\n[END OF HISTORY STEPS]\n"
# Helper function to build the input prompt for our model
def build_prompt(task_instruction: str, format_instruction: str, tools: list, query: str, conversation_history: list):
prompt = f"[BEGIN OF TASK INSTRUCTION]\n{task_instruction}\n[END OF TASK INSTRUCTION]\n\n"
prompt += f"[BEGIN OF AVAILABLE TOOLS]\n{json.dumps(xlam_format_tools)}\n[END OF AVAILABLE TOOLS]\n\n"
prompt += f"[BEGIN OF FORMAT INSTRUCTION]\n{format_instruction}\n[END OF FORMAT INSTRUCTION]\n\n"
prompt += f"[BEGIN OF QUERY]\n{query}\n[END OF QUERY]\n\n"
if len(conversation_history) > 0: prompt += build_conversation_history_prompt(conversation_history)
return prompt
# Build the input and start the inference
xlam_format_tools = convert_to_xlam_tool(openai_format_tools)
conversation_history = []
content = build_prompt(task_instruction, format_instruction, xlam_format_tools, query, conversation_history)
messages=[
{ 'role': 'user', 'content': content}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
agent_action = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
````
Then you should be able to see the following output string in JSON format:
```shell
{"thought": "I need to get the current weather for New York in fahrenheit.", "tool_calls": [{"name": "get_weather", "arguments": {"location": "New York", "unit": "fahrenheit"}}]}
```
#### 2. Multi-turn use case
We also support multi-turn interaction with our model series. Here is an example of the next round of interaction, continuing from the example above:
````python
def parse_agent_action(agent_action: str):
"""
Given an agent's action, parse it to add to conversation history
"""
try: parsed_agent_action_json = json.loads(agent_action)
except: return "", []
if "thought" not in parsed_agent_action_json.keys(): thought = ""
else: thought = parsed_agent_action_json["thought"]
if "tool_calls" not in parsed_agent_action_json.keys(): tool_calls = []
else: tool_calls = parsed_agent_action_json["tool_calls"]
return thought, tool_calls
def update_conversation_history(conversation_history: list, agent_action: str, environment_response: str, user_input: str):
"""
Update the conversation history list based on the new agent_action, environment_response, and/or user_input
"""
thought, tool_calls = parse_agent_action(agent_action)
new_step_data = {
"step_id": len(conversation_history) + 1,
"thought": thought,
"tool_calls": tool_calls,
"step_id": len(conversation_history),
"next_observation": environment_response,
"user_input": user_input,
}
conversation_history.append(new_step_data)
def get_environment_response(agent_action: str):
"""
Get the environment response for the agent_action
"""
# TODO: add custom implementation here
error_message, response_message = "", ""
return {"error": error_message, "response": response_message}
# ------------- the steps above (the single-turn example) produce agent_action ----------
# 1. get the next state after agent's response:
# The next 2 lines are examples of getting environment response and user_input.
# Depending on the particular use case, we may have either one or both of those.
environment_response = get_environment_response(agent_action)
user_input = "Now, search on the Internet for cute puppies"
# 2. after we get environment_response and/or user_input, we add them to the conversation history
update_conversation_history(conversation_history, agent_action, environment_response, user_input)
# 3. we now can build the prompt
content = build_prompt(task_instruction, format_instruction, xlam_format_tools, query, conversation_history)
# 4. Now, we just retrieve the inputs for the LLM
messages=[
{ 'role': 'user', 'content': content}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# 5. Generate the outputs & decode
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
agent_action = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
````
This would be the corresponding output:
```shell
{"thought": "I need to get the current weather for New York in fahrenheit.", "tool_calls": [{"name": "get_weather", "arguments": {"location": "New York", "unit": "fahrenheit"}}]}
```
We highly recommend using our provided prompt format and helper functions to get the best function-calling performance from our model.
#### Example multi-turn prompt and output
Prompt:
````json
[BEGIN OF TASK INSTRUCTION]
Based on the previous context and API request history, generate an API request or a response as an AI assistant.
[END OF TASK INSTRUCTION]
[BEGIN OF AVAILABLE TOOLS]
[
{
"name": "get_fire_info",
"description": "Query the latest wildfire information",
"parameters": {
"location": {
"type": "string",
"description": "Location of the wildfire, for example: 'California'",
"required": true,
"format": "free"
},
"radius": {
"type": "number",
"description": "The radius (in miles) around the location where the wildfire is occurring, for example: 10",
"required": false,
"format": "free"
}
}
},
{
"name": "get_hurricane_info",
"description": "Query the latest hurricane information",
"parameters": {
"name": {
"type": "string",
"description": "Name of the hurricane, for example: 'Irma'",
"required": true,
"format": "free"
}
}
},
{
"name": "get_earthquake_info",
"description": "Query the latest earthquake information",
"parameters": {
"magnitude": {
"type": "number",
"description": "The minimum magnitude of the earthquake that needs to be queried.",
"required": false,
"format": "free"
},
"location": {
"type": "string",
"description": "Location of the earthquake, for example: 'California'",
"required": false,
"format": "free"
}
}
}
]
[END OF AVAILABLE TOOLS]
[BEGIN OF FORMAT INSTRUCTION]
Your output should be in the JSON format, which specifies a list of function calls. The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please make tool_calls an empty list '[]'.
```{"thought": "the thought process, or an empty string", "tool_calls": [{"name": "api_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}]}```
[END OF FORMAT INSTRUCTION]
[BEGIN OF QUERY]
User: Can you give me the latest information on the wildfires occurring in California?
[END OF QUERY]
[BEGIN OF HISTORY STEPS]
[
{
"thought": "Sure, what is the radius (in miles) around the location of the wildfire?",
"tool_calls": [],
"step_id": 1,
"next_observation": "",
"user_input": "User: Let me think... 50 miles."
},
{
"thought": "",
"tool_calls": [
{
"name": "get_fire_info",
"arguments": {
"location": "California",
"radius": 50
}
}
],
"step_id": 2,
"next_observation": [
{
"location": "Los Angeles",
"acres_burned": 1500,
"status": "contained"
},
{
"location": "San Diego",
"acres_burned": 12000,
"status": "active"
}
]
},
{
"thought": "Based on the latest information, there are wildfires in Los Angeles and San Diego. The wildfire in Los Angeles has burned 1,500 acres and is contained, while the wildfire in San Diego has burned 12,000 acres and is still active.",
"tool_calls": [],
"step_id": 3,
"next_observation": "",
"user_input": "User: Can you tell me about the latest earthquake?"
}
]
[END OF HISTORY STEPS]
````
Output:
````json
{"thought": "", "tool_calls": [{"name": "get_earthquake_info", "arguments": {"location": "California"}}]}
````
## Benchmark Results
Note: **Bold** and <u>Underline</u> results denote the best result and the second best result for Success Rate, respectively.
### Berkeley Function-Calling Leaderboard (BFCL)

*Table 1: Performance comparison on BFCL-v2 leaderboard (cutoff date 09/03/2024). The rank is based on the overall accuracy, which is a weighted average of different evaluation categories. "FC" stands for function-calling mode in contrast to using a customized "prompt" to extract the function calls.*
### Webshop and ToolQuery

*Table 2: Testing results on Webshop and ToolQuery. Bold and Underline results denote the best result and the second best result for Success Rate, respectively.*
### Unified ToolQuery

*Table 3: Testing results on ToolQuery-Unified. Bold and Underline results denote the best result and the second best result for Success Rate, respectively. Values in brackets indicate the corresponding performance on ToolQuery.*
### ToolBench

*Table 4: Pass Rate on ToolBench on three distinct scenarios. Bold and Underline results denote the best result and the second best result for each setting, respectively. The results for xLAM-8x22b-r are unavailable due to the ToolBench server being down between 07/28/2024 and our evaluation cutoff date 09/03/2024.*
## License
The model is distributed under the CC-BY-NC-4.0 license.
## Citation
If you find this repo helpful, please consider citing our papers:
```bibtex
@article{zhang2024xlam,
title={xLAM: A Family of Large Action Models to Empower AI Agent Systems},
author={Zhang, Jianguo and Lan, Tian and Zhu, Ming and Liu, Zuxin and Hoang, Thai and Kokane, Shirley and Yao, Weiran and Tan, Juntao and Prabhakar, Akshara and Chen, Haolin and others},
journal={arXiv preprint arXiv:2409.03215},
year={2024}
}
```
```bibtex
@article{liu2024apigen,
title={Apigen: Automated pipeline for generating verifiable and diverse function-calling datasets},
author={Liu, Zuxin and Hoang, Thai and Zhang, Jianguo and Zhu, Ming and Lan, Tian and Kokane, Shirley and Tan, Juntao and Yao, Weiran and Liu, Zhiwei and Feng, Yihao and others},
journal={arXiv preprint arXiv:2406.18518},
year={2024}
}
```
```bibtex
@article{zhang2024agentohana,
title={AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning},
author={Zhang, Jianguo and Lan, Tian and Murthy, Rithesh and Liu, Zhiwei and Yao, Weiran and Tan, Juntao and Hoang, Thai and Yang, Liangwei and Feng, Yihao and Liu, Zuxin and others},
journal={arXiv preprint arXiv:2402.15506},
year={2024}
}
```
|
JeremiahZ/roberta-base-mrpc | JeremiahZ | "2023-09-24T22:17:46Z" | 121 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-06-13T13:38:44Z" | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
base_model: roberta-base
model-index:
- name: roberta-base-mrpc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: accuracy
value: 0.9019607843137255
name: Accuracy
- type: f1
value: 0.9295774647887324
name: F1
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: glue
type: glue
config: mrpc
split: validation
metrics:
- type: accuracy
value: 0.9019607843137255
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTgxMmY3ZTkyZmYyZTJhZjQzNzkxYWRhMzRkNjQ4MDU3NmRhNzJmNDUwMmI5NWQyYTQ1ODRmMGVhOGI3NzMxZCIsInZlcnNpb24iOjF9.E6AhJwh_S4LfzhJjvlUzGWDmJYzxwbzL0IKqIIiNhFGg-_N5G9_VJAgqiQz-6i9xGHB2fJM-G5XinjHRk4SeBA
- type: precision
value: 0.9134948096885813
name: Precision
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2NmZThjNDI0YThmMzE4MjdhNjM3OTFmYzAwNzY4ZTM4ZDc4ZDA3NTYzYWRhNTdlNWMyZWI1NTMwZmFhNzQ5NyIsInZlcnNpb24iOjF9.nOkbqzXVD3r9LrIePn7o9Ny8_GiPoSBskCx3ey3Hrexrx00Gj6B9wkVvc8EcV5bAsBTeAJSeqO7ncS_-WJjlCQ
- type: recall
value: 0.946236559139785
name: Recall
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzA2NDgzYTkzMTY4ZDQxYTdlZmM2ODY4YzM4N2E0ODk0YzRkNDI3YTFhMGIwNDZhNTI0MmIyNGU0YmFlMzRjYyIsInZlcnNpb24iOjF9.jNL0IQk6XnUd6zFfHwTSL41Ax35OdoE8xQA-2PqEFs9UtT2O9fo6cZyXDln6QPMGHOlwNgPp_PX6mLrmDHN6Cw
- type: auc
value: 0.9536411880747964
name: AUC
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmE0ZWZlNGFkMzdhNTdjZjY0NDkzNDZhOTJmY2Q1MWU4MTc3NGMwYmRjNTlkMTZjOTBiNjIwOTUzZWZhZTcwNSIsInZlcnNpb24iOjF9.ZVekwshvwAi8K6gYJmKEDk8riyiOqDhsfzbSxXa-AWKvREksbNtsDo_u6iOEYImGLbcEFfgesDE-cBnEsmMdAg
- type: f1
value: 0.9295774647887324
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDQwMmE1Y2FhMGE4M2Q5YjU3NTAyZTljZWQ5ODRkMGEyZmI4M2FhNDJjYjlkMzllMzU5NDQ1ZWI2YjNiNmM0OCIsInZlcnNpb24iOjF9.a2jDnaSZhCJ_3f1rBJ8mXfyLCRR6Y9tYb_Hayi00NPWrejDML8Bc-LoobxlPdbd8x8LVJ2vOWhbH5LP4J9kOBg
- type: loss
value: 0.48942330479621887
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODFkMWQ5NTQ0ODMwNjQ2MzcyODA1ODlhZGUzNTg4NjE2M2U5MmIzYjQ3NzgxNTQyZDkyMGNiM2ZhYzc4ZGY0MSIsInZlcnNpb24iOjF9.K6fAIi21ZNtOqKS5c9jlO7kXISNHb0DD4pzdgLsESVjjOYxqS4C9f_OBJjIV-KtuwQGbi3yNC5Y4jTWk2HvNCQ
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4898
- Accuracy: 0.9020
- F1: 0.9296
- Combined Score: 0.9158
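A minimal usage sketch (this assumes the standard `transformers` text-classification pipeline; the sentence pair below is illustrative):

```python
from transformers import pipeline

# MRPC is a sentence-pair paraphrase task, so both sentences are passed together.
classifier = pipeline("text-classification", model="JeremiahZ/roberta-base-mrpc")
result = classifier({
    "text": "The company posted record profits this quarter.",
    "text_pair": "Quarterly earnings for the firm hit an all-time high.",
})
print(result)  # label and score for the paraphrase decision
```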
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Issacwong/ppo-SnowballTargetTESTCOLAB | Issacwong | "2023-04-13T14:31:39Z" | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-04-13T14:31:18Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: Issacwong/ppo-SnowballTargetTESTCOLAB
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HydraLM/Nous-Hermes-llama-2-7b_7b_cluster030_partitioned_v3_standardized_030 | HydraLM | "2023-08-06T22:55:06Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-02T17:53:43Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
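For reference, here is a sketch of how this quantization config could be reproduced when loading the adapter. The base-model repo id below is an assumption inferred from the adapter name and is not stated in this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the bitsandbytes settings listed above (4-bit NF4, double quantization, bf16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "NousResearch/Nous-Hermes-llama-2-7b"  # assumed base model
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "HydraLM/Nous-Hermes-llama-2-7b_7b_cluster030_partitioned_v3_standardized_030")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```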
### Framework versions
- PEFT 0.4.0
|
crodri/ca_bsc_core_trf | crodri | "2023-01-05T09:59:26Z" | 10 | 0 | spacy | [
"spacy",
"token-classification",
"ca",
"license:mit",
"model-index",
"region:us"
] | token-classification | "2023-01-05T09:47:07Z" | ---
tags:
- spacy
- token-classification
language:
- ca
license: mit
model-index:
- name: ca_bsc_core_trf
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8993650794
- name: NER Recall
type: recall
value: 0.8959519292
- name: NER F Score
type: f_score
value: 0.8976552598
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9751561894
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9923557547
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9896239098
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9648130959
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9525272994
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.934621442
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9973676514
---
Testing new lemma dictionaries.

```bash
pip install https://huggingface.co/crodri/ca_bsc_core_trf/resolve/main/ca_bsc_core_trf-any-py3-none-any.whl
```
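Once the wheel is installed, the pipeline can be loaded by its package name; a minimal sketch (the example sentence is illustrative):

```python
import spacy

nlp = spacy.load("ca_bsc_core_trf")
doc = nlp("Barcelona és una ciutat de Catalunya.")
print([(token.text, token.pos_, token.lemma_) for token in doc])  # tags and lemmas
print([(ent.text, ent.label_) for ent in doc.ents])               # named entities
```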
| Feature | Description |
| --- | --- |
| **Name** | `ca_bsc_core_trf` |
| **Version** | `3.4.5` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `transformer`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `ner` |
| **Components** | `transformer`, `tagger`, `morphologizer`, `lemmatizer`, `parser`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | `mit` |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (600 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `ao0cs0`, `ao0fp0`, `ao0fs0`, `ao0mp0`, `ao0ms0`, `aq0cn0`, `aq0cp0`, `aq0cp00`, `aq0cs0`, `aq0fp0`, `aq0fpp`, `aq0fs0`, `aq0fsp`, `aq0mp0`, `aq0mpp`, `aq0ms0`, `aq0msp`, `cc`, `cs`, `da0cs0`, `da0fp0`, `da0fs0`, `da0mp0`, `da0ms0`, `dd0cp0`, `dd0cs0`, `dd0fp0`, `dd0fs0`, `dd0mp0`, `dd0ms0`, `de0cn0`, `di0cn0`, `di0cp0`, `di0cs0`, `di0fp0`, `di0fs0`, `di0mp0`, `di0ms0`, `dn0cp0`, `dn0cs0`, `dn0fp0`, `dn0fs0`, `dn0mp0`, `dn0ms0`, `dp1cpp`, `dp1fpp`, `dp1fps`, `dp1fsp`, `dp1fss`, `dp1mpp`, `dp1mps`, `dp1msp`, `dp1mss`, `dp2fss`, `dp2mps`, `dp2mss`, `dp3fp0`, `dp3fs0`, `dp3mp0`, `dp3ms0`, `dr0cs0`, `dt0fp0`, `dt0fs0`, `dt0mp0`, `dt0ms0`, `faa`, `fat`, `fc`, `fca`, `fct`, `fd`, `fe`, `fg`, `fh`, `fia`, `fit`, `fp`, `fpa`, `fpt`, `fs`, `fx`, `fz`, `nc00000`, `nccn000`, `nccp000`, `nccs000`, `ncfn000`, `ncfp000`, `ncfs000`, `ncmn000`, `ncmp000`, `ncms000`, `np00000`, `np0000a`, `np0000d`, `np0000l`, `np0000o`, `np0000p`, `p0000000`, `p010p000`, `p010s000`, `p020p000`, `p020s000`, `p0300000`, `pd0cp000`, `pd0cs000`, `pd0fp000`, `pd0fs000`, `pd0mp000`, `pd0ms000`, `pd0ns000`, `pi0cn000`, `pi0cp000`, `pi0cs000`, `pi0fp000`, `pi0fs000`, `pi0mp0`, `pi0mp000`, `pi0ms000`, `pn0cp000`, `pn0cs000`, `pn0fp000`, `pn0fs000`, `pn0mp000`, `pn0ms000`, `pp1cp000`, `pp1cs000`, `pp1csn00`, `pp1cso00`, `pp2cp000`, `pp2cp00p`, `pp2cs000`, `pp2cs00p`, `pp3cn000`, `pp3cno00`, `pp3cp000`, `pp3csa00`, `pp3csd00`, `pp3fp000`, `pp3fpa00`, `pp3fs000`, `pp3fsa00`, `pp3mp000`, `pp3mpa00`, `pp3ms000`, `pp3msa00`, `pp3nn000`, `pr000000`, `pr0cn000`, `pr0cp000`, `pr0cs0`, `pr0cs000`, `pr0ms000`, `pt000000`, `pt0cs000`, `pt0fp000`, `pt0fs000`, `pt0mp000`, `pt0ms000`, `px1fp0p0`, `px1fs0p0`, `px1ms0p0`, `px3cp0p0`, `px3cs0p0`, `px3fp0s0`, `px3fs000`, `px3fs0s0`, `px3mp000`, `px3ms000`, `rg`, `rn`, `spcmp`, `spcms`, `sps00`, `vag0000`, `vaic1p0`, `vaic3p0`, `vaic3s0`, `vaif1p0`, `vaif1s0`, `vaif2p0`, `vaif3p0`, `vaif3s0`, `vaii1p0`, `vaii1s0`, `vaii3p0`, `vaii3s0`, `vaip1p0`, `vaip1s0`, `vaip2p0`, `vaip2s0`, `vaip3p0`, `vaip3s0`, `van0000`, `vap00sm`, `vasi100`, `vasi1p0`, `vasi3p0`, `vasi3s0`, `vasp1p0`, `vasp3p0`, `vasp3s0`, `vm00000`, `vmg0000`, `vmic1p0`, `vmic1s0`, `vmic3p0`, `vmic3s0`, `vmif1p0`, `vmif1s0`, `vmif2p0`, `vmif3p0`, `vmif3s0`, `vmii1p0`, `vmii1s0`, `vmii3p0`, `vmii3s0`, `vmip1p0`, `vmip1s0`, `vmip2p0`, `vmip2s0`, `vmip3p0`, `vmip3s0`, `vmis3p0`, `vmis3s0`, `vmm01p0`, `vmm02s0`, `vmm03p0`, `vmm03s0`, `vmn0000`, `vmp0000`, `vmp00fs`, `vmp00mp`, `vmp00ms`, `vmp00pf`, `vmp00pm`, `vmp00sf`, `vmp00sm`, `vmsi1p0`, `vmsi1s0`, `vmsi3p0`, `vmsi3s0`, `vmsp1p0`, `vmsp1s0`, `vmsp2p0`, `vmsp2s0`, `vmsp3p0`, `vmsp3s0`, `vsg0000`, `vsic3p0`, `vsic3s0`, `vsif3p0`, `vsif3s0`, `vsii1p0`, `vsii1s0`, `vsii3p0`, `vsii3s0`, `vsip1p0`, `vsip1s0`, `vsip2s0`, `vsip3p0`, `vsip3s0`, `vsis3p0`, `vsis3s0`, `vsm03p0`, `vsm03s0`, `vsn0000`, `vsp00sm`, `vssi3p0`, `vssi3s0`, `vssp1p0`, `vssp3p0`, `vssp3s0`, `zm`, `zp` |
| **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumForm=Digit\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Comm`, `POS=AUX\|VerbForm=Inf`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Peri`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `POS=SCONJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Quot`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|Polarity=Neg`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Ind`, `POS=PUNCT`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `AdvType=Tim\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Semi`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Dash`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, 
`Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Colo`, `Gender=Masc\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PronType=Prs`, `POS=X`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `POS=PRON\|PronType=Ind`, `POS=SYM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `NumType=Card\|Number=Sing\|POS=NUM`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `NumForm=Digit\|NumType=Ord\|POS=ADJ`, `Foreign=Yes\|POS=PRON\|PronType=Int`, `Foreign=Yes\|Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Foreign=Yes\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, 
`Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=NUM`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Comm`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Comm`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Foreign=Yes\|POS=NOUN`, `Definite=Def\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=VERB`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|POS=DET`, `Foreign=Yes\|POS=ADV`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `POS=INTJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `AdvType=Tim\|POS=SYM`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `AdvType=Tim\|Degree=Cmp\|POS=ADV`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Pre\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Degree=Cmp\|POS=ADJ`, `POS=DET`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=DET\|PronType=Rel`, `Gender=Fem\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `Foreign=Yes\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Foreign=Yes\|POS=SCONJ`, `Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, 
`Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:pass`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 97.52 |
| `POS_ACC` | 99.24 |
| `MORPH_ACC` | 98.96 |
| `LEMMA_ACC` | 96.48 |
| `DEP_UAS` | 95.25 |
| `DEP_LAS` | 93.46 |
| `SENTS_P` | 99.71 |
| `SENTS_R` | 99.77 |
| `SENTS_F` | 99.74 |
| `ENTS_F` | 89.77 |
| `ENTS_P` | 89.94 |
| `ENTS_R` | 89.60 |
| `TRANSFORMER_LOSS` | 13983585.54 |
| `TAGGER_LOSS` | 637551.95 |
| `MORPHOLOGIZER_LOSS` | 349270.61 |
| `PARSER_LOSS` | 3321140.98 |
| `NER_LOSS` | 89131.89 | |
anas-awadalla/roberta-large-initialization-seed-0 | anas-awadalla | "2022-05-13T16:46:52Z" | 8 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-05-13T14:36:47Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-large-initialization-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-initialization-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
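A minimal usage sketch (this assumes the standard `transformers` question-answering pipeline; the question and context below are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="anas-awadalla/roberta-large-initialization-seed-0")
answer = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-large on the SQuAD dataset.",
)
print(answer)  # answer span with score and character offsets
```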
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anniezx/111 | anniezx | "2022-03-08T08:38:09Z" | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2022-03-08T08:38:09Z" | ---
license: artistic-2.0
---
|
Larbz-7/swin-tiny-patch4-window7-224-finetuned-eurosat | Larbz-7 | "2024-06-04T02:22:07Z" | 219 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-06-03T23:03:14Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1335
- Accuracy: 0.5414
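A minimal inference sketch (this assumes the standard `transformers` image-classification pipeline; the image path is illustrative):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Larbz-7/swin-tiny-patch4-window7-224-finetuned-eurosat")
predictions = classifier("path/to/image.jpg")  # local path, URL, or PIL image
print(predictions)  # top predicted labels with scores
```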
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.3862 | 0.9994 | 788 | 2.2541 | 0.5365 |
| 2.1651 | 2.0 | 1577 | 2.1688 | 0.5395 |
| 2.1559 | 2.9981 | 2364 | 2.1335 | 0.5414 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
harsha19/wet | harsha19 | "2024-10-10T22:38:13Z" | 51 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-09-26T00:48:31Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rups
---
# Rupss
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rups` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('harshasai-dev/rupss', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
NeuML/language-id-quantized | NeuML | "2025-01-26T17:20:48Z" | 1,178 | 1 | staticvectors | [
"staticvectors",
"safetensors",
"text-classification",
"language-identification",
"multilingual",
"base_model:NeuML/language-id",
"base_model:finetune:NeuML/language-id",
"license:cc-by-sa-3.0",
"8-bit",
"region:us"
] | text-classification | "2025-01-23T22:41:16Z" | ---
tags:
- text-classification
- language-identification
inference: false
license: cc-by-sa-3.0
language: multilingual
library_name: staticvectors
base_model:
- NeuML/language-id
---
# Language Detection with StaticVectors
This model is an export of the [FastText Language Identification model](https://fasttext.cc/docs/en/language-identification.html) for [`staticvectors`](https://github.com/neuml/staticvectors). `staticvectors` runs inference in Python with NumPy, which helps it maintain solid runtime performance.
Language detection is an important task, and identification with n-gram models is an efficient and highly accurate way to do it.
_This model is a quantized version of the [base language id model](https://hf.co/neuml/language-id). It uses 2x256 Product Quantization, like the original quantized model from FastText, which shrinks the model down to 4MB with only a minor hit on accuracy._
## Usage with StaticVectors
```python
from staticvectors import StaticVectors
model = StaticVectors("neuml/language-id-quantized")
model.predict(["What language is this text?"])
```
|
karrrr123456/aaaaaaaa | karrrr123456 | "2025-03-16T23:09:57Z" | 0 | 0 | transformers | [
"transformers",
"deepseek_v3",
"text-generation",
"conversational",
"custom_code",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"fp8",
"region:us"
] | text-generation | "2025-03-16T23:05:40Z" | ---
license: mit
library_name: transformers
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly change their configs and tokenizers. Please use our setting to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website: [chat.deepseek.com](https://chat.deepseek.com), and switch on the button "DeepThink"
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: Hugging Face Transformers does not directly support this model yet.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang)
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
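As an illustration only (not part of the official DeepSeek documentation), the sketch below shows one way to query a distilled model served by the vLLM or SGLang commands above through their OpenAI-compatible API, using the recommended sampling settings. The endpoint URL, port, and served model name are assumptions that depend on your deployment.
```python
# Hypothetical sketch: query a locally served DeepSeek-R1-Distill model with the
# recommended settings (temperature 0.6, top-p 0.95, no system prompt).
# The base_url assumes a local server on port 8000; adjust it to your deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

prompt = (
    "Please reason step by step, and put your final answer within \\boxed{}.\n"
    "What is 17 * 24?"
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": prompt}],  # no system prompt, per the recommendations above
    temperature=0.6,
    top_p=0.95,
    max_tokens=8192,
)

print(response.choices[0].message.content)
```
Note that enforcing the "\<think\>\n" prefix on every output typically requires the raw completion endpoint, with the chat template applied manually and "\<think\>\n" appended to the prompt, rather than the chat endpoint shown here.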
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
DeepSeek-R1 series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]]([email protected]).
|
Stern5497/org_modelorg_model | Stern5497 | "2024-05-24T05:28:51Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-05-10T20:29:53Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: org_modelorg_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# org_modelorg_model
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0305
- F1 Micro: 0.7988
- F1 Macro: 0.7745
- F1 Weighted: 0.8091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:-----------:|
| 1.7847 | 0.0064 | 25 | 1.4983 | 0.7827 | 0.7547 | 0.7929 |
| 1.3333 | 0.0127 | 50 | 1.2986 | 0.7926 | 0.7660 | 0.8031 |
| 1.2721 | 0.0191 | 75 | 1.2255 | 0.7755 | 0.7520 | 0.7862 |
| 1.127 | 0.0255 | 100 | 1.1722 | 0.7945 | 0.7694 | 0.8053 |
| 1.1108 | 0.0318 | 125 | 1.1561 | 0.7922 | 0.7556 | 0.7971 |
| 1.0969 | 0.0382 | 150 | 1.1181 | 0.7875 | 0.7581 | 0.7955 |
| 1.0714 | 0.0446 | 175 | 1.1001 | 0.7884 | 0.7658 | 0.7993 |
| 1.0219 | 0.0510 | 200 | 1.0758 | 0.8000 | 0.7727 | 0.8091 |
| 1.0979 | 0.0573 | 225 | 1.0671 | 0.7973 | 0.7656 | 0.8040 |
| 1.0846 | 0.0637 | 250 | 1.0632 | 0.7866 | 0.7582 | 0.7944 |
| 0.9977 | 0.0701 | 275 | 1.0590 | 0.7934 | 0.7600 | 0.7991 |
| 1.1262 | 0.0764 | 300 | 1.0404 | 0.7984 | 0.7699 | 0.8066 |
| 1.0066 | 0.0828 | 325 | 1.0396 | 0.7981 | 0.7681 | 0.8053 |
| 1.0534 | 0.0892 | 350 | 1.0360 | 0.8005 | 0.7768 | 0.8113 |
| 1.0302 | 0.0955 | 375 | 1.0320 | 0.7993 | 0.7754 | 0.8099 |
| 1.0965 | 0.1019 | 400 | 1.0305 | 0.7988 | 0.7745 | 0.8091 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
StrangeSX/Saraa-8B | StrangeSX | "2024-05-10T11:14:14Z" | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-10T10:42:54Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** StrangeSX
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/skuma307_-_MedPaxTral-2x7b-8bits | RichardErkhov | "2024-05-16T03:29:08Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-16T03:21:05Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MedPaxTral-2x7b - bnb 8bits
- Model creator: https://huggingface.co/skuma307/
- Original model: https://huggingface.co/skuma307/MedPaxTral-2x7b/
Original model description:
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
---
A medical MoE (mixture of experts) built by merging three leading models in the medical domain: BioMistral, Meditron, and Medalpaca. The fusion was performed with the MergeKit library, a tool designed to blend the strengths of multiple models into a single, more capable LLM.
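As a usage sketch not provided in the original card, the pre-quantized 8-bit weights can typically be loaded with `transformers` (with `bitsandbytes` installed) like any causal LM; the prompt below is only an assumption, since the expected chat format depends on the merged source models.
```python
# Minimal sketch for loading this pre-quantized 8-bit checkpoint.
# Because the weights are already stored in bitsandbytes 8-bit format,
# no extra quantization config should be needed; a CUDA GPU is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/skuma307_-_MedPaxTral-2x7b-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What are common symptoms of iron-deficiency anemia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```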
|
jeeyoung/dpo69607th_trial | jeeyoung | "2024-05-29T18:23:47Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:beomi/KoAlpaca-Polyglot-5.8B",
"base_model:adapter:beomi/KoAlpaca-Polyglot-5.8B",
"region:us"
] | null | "2024-05-29T18:23:39Z" | ---
library_name: peft
base_model: beomi/KoAlpaca-Polyglot-5.8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.0 |
fhamborg/phi-4-4bit-bnb | fhamborg | "2025-02-20T10:14:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"phi",
"phi4",
"nlp",
"math",
"code",
"chat",
"conversational",
"custom_code",
"en",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-02-20T09:34:36Z" | ---
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- phi
- phi4
- nlp
- math
- code
- chat
- conversational
base_model: microsoft/phi-4
library_name: transformers
---
# Phi-4 bnb (4-bit Quantized)
[](https://huggingface.co/fhamborg/phi-4-4bit-gptq)
## Model Description
This is a **4-bit quantized** version of the Phi-4 transformer model, optimized for **efficient inference** while maintaining performance.
- **Base Model**: [Phi-4](https://huggingface.co/...)
- **Quantization**: bnb (4-bit)
- **Format**: `safetensors`
- **Tokenizer**: Uses standard `vocab.json` and `merges.txt`
## Intended Use
- Fast inference with minimal VRAM usage
- Deployment in resource-constrained environments
- Optimized for **low-latency text generation**
## Model Details
| Attribute | Value |
|-----------------|-------|
| **Model Name** | Phi-4 bnb |
| **Quantization** | 4-bit (bitsandbytes) |
| **File Format** | `.safetensors` |
| **Tokenizer** | `phi-4-tokenizer.json` |
| **VRAM Usage** | ~X GB (depending on batch size) |
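A minimal loading sketch, not part of the original card, assuming the pre-quantized weights load directly with `transformers` and `bitsandbytes`:
```python
# Hypothetical example: load the pre-quantized 4-bit checkpoint and run a short chat turn.
# Requires bitsandbytes and a CUDA GPU; add trust_remote_code=True if the checkpoint needs it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fhamborg/phi-4-4bit-bnb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain the Pythagorean theorem in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```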
|
llmvetter/ppo-SnowballTarget | llmvetter | "2024-06-13T10:35:04Z" | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-06-13T10:34:58Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: llmvetter/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bnurpek/gpt2-256t-nr1wr-pos-20 | bnurpek | "2024-01-08T07:56:23Z" | 34 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | "2024-01-08T07:55:45Z" | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="bnurpek/gpt2-256t-nr1wr-pos-20")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("bnurpek/gpt2-256t-nr1wr-pos-20")
model = AutoModelForCausalLMWithValueHead.from_pretrained("bnurpek/gpt2-256t-nr1wr-pos-20")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
torchxrayvision/densenet121-res224-pc | torchxrayvision | "2022-06-21T20:09:59Z" | 27 | 0 | transformers | [
"transformers",
"vision",
"image-classification",
"dataset:nih-pc-chex-mimic_ch-google-openi-rsna",
"arxiv:2111.00595",
"arxiv:2002.02497",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-06-21T13:03:00Z" |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- nih-pc-chex-mimic_ch-google-openi-rsna
---
# densenet121-res224-pc
A DenseNet is a type of convolutional neural network that utilises dense connections between layers, through Dense Blocks, where we connect all layers (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers.
### How to use
Here is how to use this model to classify an image of xray:
Note: Each pretrained model has 18 outputs. The `all` model has every output trained. However, for the other weights some targets are not trained and will predict randomly because they do not exist in the training dataset. The only valid outputs are listed in the field `{dataset}.pathologies` on the dataset that corresponds to the weights.
Benchmarks of the models are here: [BENCHMARKS.md](https://github.com/mlmed/torchxrayvision/blob/master/BENCHMARKS.md)
```python
import urllib.request
import skimage
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms
import torchxrayvision as xrv
model_name = "densenet121-res224-pc"
img_url = "https://huggingface.co/spaces/torchxrayvision/torchxrayvision-classifier/resolve/main/16747_3_1.jpg"
img_path = "xray.jpg"
urllib.request.urlretrieve(img_url, img_path)
model = xrv.models.get_model(model_name, from_hf_hub=True)
img = skimage.io.imread(img_path)
img = xrv.datasets.normalize(img, 255)
# Check that images are 2D arrays
if len(img.shape) > 2:
img = img[:, :, 0]
if len(img.shape) < 2:
print("error, dimension lower than 2 for image")
# Add color channel
img = img[None, :, :]
transform = torchvision.transforms.Compose([xrv.datasets.XRayCenterCrop()])
img = transform(img)
with torch.no_grad():
img = torch.from_numpy(img).unsqueeze(0)
preds = model(img).cpu()
output = {
k: float(v)
for k, v in zip(xrv.datasets.default_pathologies, preds[0].detach().numpy())
}
print(output)
```
For more code examples, we refer to the [example scripts](https://github.com/kamalkraj/torchxrayvision/blob/master/scripts).
### Citation
Primary TorchXRayVision paper: [https://arxiv.org/abs/2111.00595](https://arxiv.org/abs/2111.00595)
```
Joseph Paul Cohen, Joseph D. Viviano, Paul Bertin, Paul Morrison, Parsa Torabian, Matteo Guarrera, Matthew P Lungren, Akshay Chaudhari, Rupert Brooks, Mohammad Hashir, Hadrien Bertrand
TorchXRayVision: A library of chest X-ray datasets and models.
https://github.com/mlmed/torchxrayvision, 2020
@article{Cohen2020xrv,
author = {Cohen, Joseph Paul and Viviano, Joseph D. and Bertin, Paul and Morrison, Paul and Torabian, Parsa and Guarrera, Matteo and Lungren, Matthew P and Chaudhari, Akshay and Brooks, Rupert and Hashir, Mohammad and Bertrand, Hadrien},
journal = {https://github.com/mlmed/torchxrayvision},
title = {{TorchXRayVision: A library of chest X-ray datasets and models}},
url = {https://github.com/mlmed/torchxrayvision},
year = {2020}
arxivId = {2111.00595},
}
```
and this paper which initiated development of the library: [https://arxiv.org/abs/2002.02497](https://arxiv.org/abs/2002.02497)
```
Joseph Paul Cohen and Mohammad Hashir and Rupert Brooks and Hadrien Bertrand
On the limits of cross-domain generalization in automated X-ray prediction.
Medical Imaging with Deep Learning 2020 (Online: https://arxiv.org/abs/2002.02497)
@inproceedings{cohen2020limits,
title={On the limits of cross-domain generalization in automated X-ray prediction},
author={Cohen, Joseph Paul and Hashir, Mohammad and Brooks, Rupert and Bertrand, Hadrien},
booktitle={Medical Imaging with Deep Learning},
year={2020},
url={https://arxiv.org/abs/2002.02497}
}
```
|
teasan/WeddingImperial | teasan | "2023-11-26T08:30:19Z" | 0 | 7 | diffusers | [
"diffusers",
"anime",
"art",
"stable-diffusion",
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-06-28T09:56:33Z" | ---
license: creativeml-openrail-m
language:
- ja
tags:
- anime
- art
- stable-diffusion
library_name: diffusers
---
<!--  -->
# About WeddingImperial
## Overview
This is a photo-style model created by someone who usually only merges illustration and anime models.
For V3, I reduced the unnaturally strong contrast and adjusted the amount of light on the subject and the scene as a whole.
Since a photo model produces photorealistic output, I paid particular attention to adjusting the brightness on the subject so that more beautiful results come out.
Please treat the recommended settings as rough guidelines only.
## CHANGE LOG
- Added WeddingImperialV3
- Added WeddingImperialV2
- Added WeddingImperialV1
## Usage
After cloning or downloading the model, place it in the following directory:
```
webui\models\Stable-diffusion\
```
## Recommended settings (author's settings)
- Steps: 50
- Sampler: DPM++ 2M Karras
- CFG scale: 10
- Denoising strength: around 0.55?
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 10
- Hires upscaler: R-ESRGAN 4x+ or R-ESRGAN 4x+ Anime6B, etc.
- VAE:mse840000_klf8anime_klf8anime2
<details>
<summary>WeddingImperialV2</summary>
<div>
- Steps: 30 or 50
- Sampler: DPM++ 2M Karras
- CFG scale: 11
- Denoising strength: around 0.55?
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 10
- Hires upscaler: Latent nearest, etc.
- VAE:mse840000_klf8anime_klf8anime2
~~~~~~~~~
</div>
</details>
<details>
<summary>WeddingImperialV1</summary>
<div>
- Steps: 30~40
- Sampler: DPM++ 2M Karras
- CFG scale: 7~11
- Denoising strength: around 0.55?
- Clip skip: 2
- Hires upscale: 2
- Hires steps: 10
- Hires upscaler: Latent nearest, etc.
~~~~~~~~~
</div>
</details>
## Recommended negative prompt (NP)
```
(negative_hand-neg:1.2):25 ], (worst quality, bad quality, low quality, normal quality:2), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw, watermark
Note: set the 25 in [ :(negative_hand-neg:1.2):25 ] to half of your Steps value.
```
<details>
<summary>WeddingImperialV2</summary>
<div>
```
EasyNegative, BraV4Neg, [ :(negative_hand-neg:1.2):15 ], (worst quality, bad quality:1.4), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw
```
~~~~~~~~~
</div>
</details>
<details>
<summary>WeddingImperialV1</summary>
<div>
```
EasyNegative, [ :(negative_hand-neg:1.2):15 ], text, (nsfw:1.2),
or
(worst quality, bad quality:1.4), [ :(negative_hand-neg:1.2):15 ], text, (nsfw:1.2),
etc.
Note: add monochrome as needed.
```
~~~~~~~~~
</div>
</details>
## Sample images
<details>
<summary>WeddingImperialV3</summary>
<div>

```
beautiful person, perm hair, yellow hair, light blue eye,
(GIGANTIC HUGE BREAST:0.6),
sweater, (trench coat:1.2), Denim,
snow,
Negative prompt: (negative_hand-neg:1.2):25 ], (worst quality, bad quality, low quality, normal quality:2), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw, watermark
```

```
high quality, highres, ultra detail, realistic, pov,
1girl, cute, long hair, bikini, GIGANTIC HUGE BREAST, girl sitting,
Negative prompt: (negative_hand-neg:1.2):25 ], (worst quality, bad quality, low quality, normal quality:2), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw, watermark
```

```
1girl,
Negative prompt: (negative_hand-neg:1.2):25 ], (worst quality, bad quality, low quality, normal quality:2), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw, watermark
```

```
beautiful person,
waves hair, natural blonde hair, hazel eye,
(GIGANTIC HUGE BREAST:0.6),
sweater, (trench coat:1.2),
Christmas tree,
light effects, sparkle effects, (deep color hues effects:1.2), (silky feel effects:1.1),
Negative prompt: (negative_hand-neg:1.2):25 ], (worst quality, bad quality, low quality, normal quality:2), (extra fingers, deformed hands, polydactyl:1.5), (bad hands, bad fingers, bad arm, missing finger:1.5), text, nsfw, watermark
```
~~~~~~~~~
</div>
</details>
<details>
<summary>WeddingImperialV2</summary>
<div>

```
▲ Prompt
high quality, highres, ultra detail, realistic, 1girl, cute, long hair, blond hair,
```

```
▲ Prompt
high quality, highres, ultra detail, realistic, 1girl, cute,
```

```
▲ Prompt
high quality, highres, ultra detail, realistic, 1girl, cute, long hair, blond hair, street at night,
```

```
▲ Prompt
high quality, highres, ultra detail, realistic, pov, 1girl, cute, long hair, bikini, GIGANTIC HUGE BREAST, girl sitting,
```
~~~~~~~~~
</div>
</details>
<details>
<summary>WeddingImperialV1</summary>
<div>

```
▲ Prompt
absurdres, highres, upper body, side view,
1girl, bridal costume, church, stained glass, Light Effects, crown,
Negative prompt: EasyNegative, [ :(negative_hand-neg:1.2):15 ], text, (nsfw:1.2),
```

```
▲ Prompt
absurdres, highres, (official art, beautiful and aesthetic:1.2), close view,
1girl, shining sky, vast world, gazing, awe-inspiring expression, distant horizon, clouds, high hill, natural beauty, inspiration, night sky, Shining Stars, DOF,
Negative prompt: EasyNegative, [ :(negative_hand-neg:1.2):15 ], text, (nsfw:1.2),
```

With EasyNegative
```
▲ Prompt
absurdres, highres, (official art, beautiful and aesthetic:1.2), close view,
1girl, cute,
Negative prompt: EasyNegative, [ :(negative_hand-neg:1.2):15 ], text, (nsfw:1.2),
```

Without EasyNegative
```
▲ Prompt
absurdres, highres, (official art, beautiful and aesthetic:1.2), close view,
1girl, cute,
Negative prompt: (worst quality, bad quality:1.4), [ :(negative_hand-neg:1.2):15 ], text, (nsfw:1.2),
```

```
▲ Prompt
absurdres, highres,
1male,
Negative prompt: EasyNegative, [ :(negative_hand-neg:1.2):15 ], text, (nsfw:1.2),
```

```
▲ Prompt
absurdres, highres,
cat, cute,
Negative prompt: EasyNegative, [ :(negative_hand-neg:1.2):15 ], text, (nsfw:1.2),
```
~~~~~~~~~
</div>
</details>
---
# Disclaimer
- Images created with this model are the responsibility of each individual user; the model creator accepts no responsibility whatsoever for any problems or disputes arising from generated images.
- This model is not intended for adult content. The model creator accepts no responsibility for any problems arising from the generation of adult-oriented content.
- If a licensing issue arises, this model may be removed without prior notice. Please be aware of this.
- Use for criminal purposes or for specialized applications such as medical use is prohibited. The model creator accepts no responsibility for negligence resulting from failure to comply with the license.
---
# About the Stable Diffusion license
- This model is open access and available to everyone, and the CreativeML OpenRAIL-M license further specifies rights and usage.
- The CreativeML OpenRAIL license specifies the following:
1. You may not use this model to deliberately create or share illegal or harmful outputs or content.
2. The author claims no rights over the outputs you generate. You are free to use them, but please follow the provisions set out in the license. Use at your own risk.
3. You may redistribute the weights and use the model commercially or as a service. If you do, please note that you must share a copy of the CreativeML OpenRAIL-M license, including the same usage restrictions, with all of your users (please read the license fully and carefully).
- (Full license text: [https://huggingface.co/spaces/CompVis/stable-diffusion-license](https://huggingface.co/spaces/CompVis/stable-diffusion-license))
---
# About the author
twitter:<a href="https://twitter.com/wims_Tea" target="_blank"> https://twitter.com/wims_Tea</a>
--- |
memevis/NT11 | memevis | "2025-02-20T19:10:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-20T19:05:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aengusl/gibberish | aengusl | "2024-05-14T18:39:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-14T18:38:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eliot-hub/en_pipeline | eliot-hub | "2023-10-28T10:47:24Z" | 6 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | "2023-10-26T09:02:22Z" | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9175338189
- name: NER Recall
type: recall
value: 0.9087863953
- name: NER F Score
type: f_score
value: 0.9131391586
---
This model was trained with spaCy (distilbert-base-uncased transformer) to perform NER on resumes.
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.2,<3.8.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `COMPANY`, `DIPLOMA`, `JOB_TITLE`, `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 91.31 |
| `ENTS_P` | 91.75 |
| `ENTS_R` | 90.88 |
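A minimal usage sketch, assuming the pipeline has been packaged and installed as a spaCy package (the package name below is an assumption):
```python
# Hypothetical usage: load the packaged pipeline and extract resume entities.
import spacy

nlp = spacy.load("en_pipeline")  # assumes the pipeline package is installed in your environment
doc = nlp("Senior Data Scientist at Acme Corp, MSc in Computer Science, skilled in Python and SQL.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # labels: COMPANY, DIPLOMA, JOB_TITLE, SKILL
```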
|
songfeng/output_models_ast_falcon | songfeng | "2024-01-31T00:37:44Z" | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-01-30T22:48:34Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: output_models_ast_falcon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_models_ast_falcon
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
vtayyab6/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo | vtayyab6 | "2024-05-25T21:48:16Z" | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-25T21:46:00Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** vtayyab6
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
crocutacrocuto/dinov2-base-MEG5-5 | crocutacrocuto | "2025-03-05T06:38:55Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/dinov2-base",
"base_model:finetune:facebook/dinov2-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-02-27T01:22:31Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/dinov2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dinov2-base-MEG5-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-base-MEG5-5
This model is a fine-tuned version of [facebook/dinov2-base](https://huggingface.co/facebook/dinov2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6964
- Accuracy: 0.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.1774 | 1.0000 | 12653 | 0.6342 | 0.8221 |
| 0.1053 | 2.0 | 25307 | 0.5782 | 0.8484 |
| 0.0592 | 3.0000 | 37960 | 0.5909 | 0.8606 |
| 0.0452 | 4.0 | 50614 | 0.6203 | 0.8698 |
| 0.0121 | 4.9998 | 63265 | 0.6964 | 0.8704 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
Yeetables/xlm-roberta-base-finetuned-panx-de | Yeetables | "2023-09-27T22:54:54Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-09-25T23:23:43Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8551200724966017
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8551
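A minimal inference sketch, not part of the auto-generated card, using the standard token-classification pipeline:
```python
# Hypothetical example: run German NER with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Yeetables/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```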
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 263 | 0.1629 | 0.8173 |
| 0.2088 | 2.0 | 526 | 0.1385 | 0.8445 |
| 0.2088 | 3.0 | 789 | 0.1348 | 0.8551 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CristianR8/vit-base-cocoa | CristianR8 | "2024-12-17T12:14:03Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-12-09T20:40:35Z" | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- vision
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-cocoa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-cocoa
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the SemilleroCV/Cocoa-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2061
- Accuracy: 0.9278
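As an inference sketch that is not part of the auto-generated card, the fine-tuned checkpoint can presumably be used with the standard image-classification pipeline:
```python
# Hypothetical example: classify a cocoa bean image with the fine-tuned ViT checkpoint.
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="CristianR8/vit-base-cocoa")
image = Image.open("cocoa_sample.jpg")  # path to a local image (assumption)

for pred in classifier(image):
    print(f"{pred['label']}: {pred['score']:.3f}")
```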
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.3733 | 1.0 | 196 | 0.9025 | 0.3558 |
| 0.3727 | 2.0 | 392 | 0.8989 | 0.4098 |
| 0.3901 | 3.0 | 588 | 0.8989 | 0.2668 |
| 0.3421 | 4.0 | 784 | 0.9170 | 0.2612 |
| 0.2703 | 5.0 | 980 | 0.9278 | 0.2061 |
| 0.1734 | 6.0 | 1176 | 0.9278 | 0.2568 |
| 0.1385 | 7.0 | 1372 | 0.9206 | 0.3242 |
| 0.3237 | 8.0 | 1568 | 0.9386 | 0.2922 |
| 0.236 | 9.0 | 1764 | 0.9386 | 0.3044 |
| 0.2124 | 10.0 | 1960 | 0.9061 | 0.3848 |
| 0.0454 | 11.0 | 2156 | 0.9350 | 0.3527 |
| 0.0756 | 12.0 | 2352 | 0.9350 | 0.2844 |
| 0.0605 | 13.0 | 2548 | 0.9314 | 0.3077 |
| 0.0214 | 14.0 | 2744 | 0.9025 | 0.6295 |
| 0.1816 | 15.0 | 2940 | 0.9386 | 0.2996 |
| 0.0338 | 16.0 | 3136 | 0.9278 | 0.3597 |
| 0.2136 | 17.0 | 3332 | 0.9314 | 0.4070 |
| 0.188 | 18.0 | 3528 | 0.9458 | 0.3532 |
| 0.0539 | 19.0 | 3724 | 0.9386 | 0.3843 |
| 0.0992 | 20.0 | 3920 | 0.9422 | 0.3904 |
| 0.0019 | 21.0 | 4116 | 0.9458 | 0.3732 |
| 0.0348 | 22.0 | 4312 | 0.9386 | 0.4021 |
| 0.0823 | 23.0 | 4508 | 0.9350 | 0.4217 |
| 0.1125 | 24.0 | 4704 | 0.9097 | 0.4704 |
| 0.0173 | 25.0 | 4900 | 0.9350 | 0.3700 |
| 0.0442 | 26.0 | 5096 | 0.9314 | 0.3725 |
| 0.0009 | 27.0 | 5292 | 0.9278 | 0.4819 |
| 0.0087 | 28.0 | 5488 | 0.9170 | 0.6492 |
| 0.0021 | 29.0 | 5684 | 0.9242 | 0.5297 |
| 0.2552 | 30.0 | 5880 | 0.9314 | 0.4482 |
| 0.0154 | 31.0 | 6076 | 0.9242 | 0.6075 |
| 0.0009 | 32.0 | 6272 | 0.9350 | 0.4101 |
| 0.1626 | 33.0 | 6468 | 0.9350 | 0.4653 |
| 0.0276 | 34.0 | 6664 | 0.9386 | 0.4174 |
| 0.0139 | 35.0 | 6860 | 0.9422 | 0.3992 |
| 0.0023 | 36.0 | 7056 | 0.9170 | 0.6972 |
| 0.1264 | 37.0 | 7252 | 0.9314 | 0.4980 |
| 0.0113 | 38.0 | 7448 | 0.9170 | 0.7154 |
| 0.0694 | 39.0 | 7644 | 0.9242 | 0.5443 |
| 0.0976 | 40.0 | 7840 | 0.9350 | 0.3852 |
| 0.1191 | 41.0 | 8036 | 0.9242 | 0.5398 |
| 0.1249 | 42.0 | 8232 | 0.9170 | 0.6197 |
| 0.0002 | 43.0 | 8428 | 0.9134 | 0.6967 |
| 0.1163 | 44.0 | 8624 | 0.9242 | 0.5697 |
| 0.0201 | 45.0 | 8820 | 0.9134 | 0.7221 |
| 0.0003 | 46.0 | 9016 | 0.9314 | 0.5253 |
| 0.0224 | 47.0 | 9212 | 0.9495 | 0.3817 |
| 0.0183 | 48.0 | 9408 | 0.9242 | 0.4966 |
| 0.0077 | 49.0 | 9604 | 0.9458 | 0.4349 |
| 0.0083 | 50.0 | 9800 | 0.9242 | 0.5191 |
| 0.0571 | 51.0 | 9996 | 0.9206 | 0.5826 |
| 0.0583 | 52.0 | 10192 | 0.9170 | 0.5335 |
| 0.0019 | 53.0 | 10388 | 0.9206 | 0.5843 |
| 0.0044 | 54.0 | 10584 | 0.9206 | 0.5895 |
| 0.0065 | 55.0 | 10780 | 0.9350 | 0.4487 |
| 0.0126 | 56.0 | 10976 | 0.9314 | 0.6221 |
| 0.0093 | 57.0 | 11172 | 0.9314 | 0.5138 |
| 0.0004 | 58.0 | 11368 | 0.9314 | 0.5162 |
| 0.0002 | 59.0 | 11564 | 0.9350 | 0.4514 |
| 0.1463 | 60.0 | 11760 | 0.9386 | 0.4744 |
| 0.0001 | 61.0 | 11956 | 0.9314 | 0.5338 |
| 0.0006 | 62.0 | 12152 | 0.9278 | 0.5788 |
| 0.0269 | 63.0 | 12348 | 0.9278 | 0.5500 |
| 0.1 | 64.0 | 12544 | 0.9206 | 0.6467 |
| 0.0004 | 65.0 | 12740 | 0.9242 | 0.5828 |
| 0.0001 | 66.0 | 12936 | 0.9314 | 0.5283 |
| 0.0001 | 67.0 | 13132 | 0.9206 | 0.6212 |
| 0.0002 | 68.0 | 13328 | 0.9242 | 0.4973 |
| 0.0058 | 69.0 | 13524 | 0.9278 | 0.5021 |
| 0.0605 | 70.0 | 13720 | 0.9170 | 0.6982 |
| 0.0006 | 71.0 | 13916 | 0.9350 | 0.4602 |
| 0.0021 | 72.0 | 14112 | 0.9314 | 0.5595 |
| 0.0004 | 73.0 | 14308 | 0.9386 | 0.4366 |
| 0.0124 | 74.0 | 14504 | 0.9134 | 0.7612 |
| 0.0284 | 75.0 | 14700 | 0.9206 | 0.6054 |
| 0.0001 | 76.0 | 14896 | 0.9242 | 0.5922 |
| 0.0119 | 77.0 | 15092 | 0.9242 | 0.5496 |
| 0.0006 | 78.0 | 15288 | 0.9206 | 0.6327 |
| 0.0711 | 79.0 | 15484 | 0.9386 | 0.5177 |
| 0.0001 | 80.0 | 15680 | 0.9134 | 0.7391 |
| 0.0985 | 81.0 | 15876 | 0.9242 | 0.5683 |
| 0.0001 | 82.0 | 16072 | 0.9206 | 0.6106 |
| 0.0 | 83.0 | 16268 | 0.9242 | 0.6235 |
| 0.0006 | 84.0 | 16464 | 0.9061 | 0.7914 |
| 0.0001 | 85.0 | 16660 | 0.9314 | 0.5649 |
| 0.0 | 86.0 | 16856 | 0.9350 | 0.5512 |
| 0.066 | 87.0 | 17052 | 0.9350 | 0.5473 |
| 0.0189 | 88.0 | 17248 | 0.9386 | 0.4866 |
| 0.0 | 89.0 | 17444 | 0.9386 | 0.5136 |
| 0.0001 | 90.0 | 17640 | 0.9350 | 0.5246 |
| 0.0001 | 91.0 | 17836 | 0.9314 | 0.5626 |
| 0.0037 | 92.0 | 18032 | 0.9350 | 0.5335 |
| 0.0999 | 93.0 | 18228 | 0.9242 | 0.6357 |
| 0.1124 | 94.0 | 18424 | 0.9278 | 0.5905 |
| 0.0175 | 95.0 | 18620 | 0.9206 | 0.6618 |
| 0.0001 | 96.0 | 18816 | 0.9386 | 0.5588 |
| 0.0259 | 97.0 | 19012 | 0.9350 | 0.5549 |
| 0.0001 | 98.0 | 19208 | 0.9350 | 0.5599 |
| 0.0285 | 99.0 | 19404 | 0.9350 | 0.5517 |
| 0.003 | 100.0 | 19600 | 0.9350 | 0.5570 |
### Framework versions
- Transformers 4.48.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
|
MaziyarPanahi/mergekit-slerp-rxkhjnf-GGUF | MaziyarPanahi | "2024-06-16T20:32:15Z" | 28 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:mergekit-community/mergekit-slerp-rxkhjnf",
"base_model:quantized:mergekit-community/mergekit-slerp-rxkhjnf"
] | text-generation | "2024-06-16T20:06:33Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- llama
- text-generation
- mergekit
- merge
- conversational
- base_model:NousResearch/Meta-Llama-3-8B-Instruct
- base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-rxkhjnf-GGUF
base_model: mergekit-community/mergekit-slerp-rxkhjnf
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mergekit-slerp-rxkhjnf-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-rxkhjnf-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-rxkhjnf](https://huggingface.co/mergekit-community/mergekit-slerp-rxkhjnf)
## Description
[MaziyarPanahi/mergekit-slerp-rxkhjnf-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-rxkhjnf-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-rxkhjnf](https://huggingface.co/mergekit-community/mergekit-slerp-rxkhjnf).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
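As a quick illustration of local use with one of the clients listed above, here is a minimal sketch using `llama-cpp-python`; the GGUF file name below is an assumption, so substitute whichever quantized file you actually download from this repository.

```python
from llama_cpp import Llama

# Path to a GGUF file downloaded from this repo (file name is illustrative only)
llm = Llama(model_path="mergekit-slerp-rxkhjnf.Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```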
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Lilforex/Lilforex | Lilforex | "2025-03-25T15:44:49Z" | 0 | 1 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2025-03-25T15:44:49Z" | ---
license: artistic-2.0
---
|
CharlesLi/mistral_sky_safe_o1_llama_3_70B_default_1000_1000_full | CharlesLi | "2025-01-14T22:46:18Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-14T22:28:18Z" | ---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: mistral_sky_safe_o1_llama_3_70B_default_1000_1000_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_sky_safe_o1_llama_3_70B_default_1000_1000_full
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
lisabdunlap/mistral-24b-all-no-blarb-id-json | lisabdunlap | "2025-04-10T06:57:39Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.3-70B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-10T06:57:38Z" | |
second-state/Triplex-GGUF | second-state | "2024-07-31T04:47:50Z" | 24 | 0 | transformers | [
"transformers",
"gguf",
"phi3",
"text-generation",
"custom_code",
"base_model:SciPhi/Triplex",
"base_model:quantized:SciPhi/Triplex",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-07-31T04:30:30Z" | ---
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model: SciPhi/Triplex
model_creator: SciPhi
model_name: Triplex
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Triplex-GGUF
## Original Model
[SciPhi/Triplex](https://huggingface.co/SciPhi/Triplex)
## Run with LlamaEdge
- LlamaEdge version: coming soon
<!-- - LlamaEdge version: [v0.12.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.3) and above
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `32000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Triplex-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template chatml \
--ctx-size 32000 \
--model-name Triplex
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. \
--nn-preload default:GGML:AUTO:Triplex-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template chatml \
--ctx-size 32000
``` -->
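Until the LlamaEdge instructions above are finalized, one way to fetch a single quantized file locally is via `huggingface_hub` — a minimal sketch, using a file name from the table below:

```python
from huggingface_hub import hf_hub_download

# Download one quantized variant from this repo (pick any file name from the table below)
local_path = hf_hub_download(
    repo_id="second-state/Triplex-GGUF",
    filename="Triplex-Q5_K_M.gguf",
)
print(local_path)
```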
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Triplex-Q2_K.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q2_K.gguf) | Q2_K | 2 | 1.42 GB| smallest, significant quality loss - not recommended for most purposes |
| [Triplex-Q3_K_L.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q3_K_L.gguf) | Q3_K_L | 3 | 2.09 GB| small, substantial quality loss |
| [Triplex-Q3_K_M.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q3_K_M.gguf) | Q3_K_M | 3 | 1.96 GB| very small, high quality loss |
| [Triplex-Q3_K_S.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q3_K_S.gguf) | Q3_K_S | 3 | 1.68 GB| very small, high quality loss |
| [Triplex-Q4_0.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q4_0.gguf) | Q4_0 | 4 | 2.18 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Triplex-Q4_K_M.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q4_K_M.gguf) | Q4_K_M | 4 | 2.39 GB| medium, balanced quality - recommended |
| [Triplex-Q4_K_S.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q4_K_S.gguf) | Q4_K_S | 4 | 2.19 GB| small, greater quality loss |
| [Triplex-Q5_0.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q5_0.gguf) | Q5_0 | 5 | 2.64 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Triplex-Q5_K_M.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q5_K_M.gguf) | Q5_K_M | 5 | 2.82 GB| large, very low quality loss - recommended |
| [Triplex-Q5_K_S.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q5_K_S.gguf) | Q5_K_S | 5 | 2.64 GB| large, low quality loss - recommended |
| [Triplex-Q6_K.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q6_K.gguf) | Q6_K | 6 | 3.14 GB| very large, extremely low quality loss |
| [Triplex-Q8_0.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-Q8_0.gguf) | Q8_0 | 8 | 4.06 GB| very large, extremely low quality loss - not recommended |
| [Triplex-f16.gguf](https://huggingface.co/second-state/Triplex-GGUF/blob/main/Triplex-f16.gguf) | f16 | 16 | 7.64 GB| |
*Quantized with llama.cpp b3463* |
ronigold/dictalm2.0-instruct-fine-tuned | ronigold | "2024-05-10T13:43:37Z" | 5,599 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-06T10:55:11Z" | ---
license: mit
---
# Model Card for ronigold/dictalm2.0-instruct-fine-tuned
This is a fine-tuned version of the Dicta-IL dictalm2.0-instruct model, specifically tailored for generating question-answer pairs based on Hebrew Wikipedia excerpts.
The model was fine-tuned to improve its ability to understand and generate natural questions and their corresponding answers in Hebrew.
## Model Details
### Model Description
The model, ronigold/dictalm2.0-instruct-fine-tuned, is a fine-tuned version of the dictalm2.0-instruct model on a synthetically generated dataset. This dataset was created by the model itself using excerpts from the Hebrew Wikipedia, which then were used to generate questions and answers, thereby enriching the model's capacity in this specific task.
- **Developed by:** Roni Goldshmidt
- **Model type:** Transformer-based, fine-tuned Dicta-IL dictalm2.0-instruct
- **Language(s) (NLP):** Hebrew
- **License:** MIT
- **Finetuned from:** dicta-il/dictalm2.0-instruct
## Uses
### Direct Use
The model is ideal for educational and informational applications, where generating contextual question-answer pairs from textual content is needed, particularly in the Hebrew language.
### Out-of-Scope Use
The model is not intended for generating answers where factual accuracy from unverified sources is critical, such as medical advice or legal information.
## Bias, Risks, and Limitations
While the model is robust in generating context-relevant Q&A pairs, it may still inherit or amplify biases present in the training data, which primarily comes from Wikipedia. Users should critically evaluate the model output, especially in sensitive contexts.
### Recommendations
It is recommended to use this model with an additional layer of human oversight when used in sensitive or critical applications to ensure the accuracy and appropriateness of the content generated.
## How to Get Started with the Model
To get started, load the model using the Transformers library by Hugging Face:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "ronigold/dictalm2.0-instruct-fine-tuned"
# dictalm2.0-instruct is a causal (decoder-only) language model, so AutoModelForCausalLM is the appropriate class
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
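For generation, a minimal sketch might look like the following; the prompt format for this checkpoint is not documented, so the Hebrew question below is only an illustrative placeholder.

```python
import torch

question = "מהי בירת צרפת?"  # illustrative Hebrew question ("What is the capital of France?")
inputs = tokenizer(question, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```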
## Training Details
### Training Data
The training data consists of synthetic question-answer pairs generated from the Hebrew Wikipedia. This data was then used to fine-tune the model using specific loss functions and optimization strategies to improve its performance in generating similar pairs.
```python
# Example of setting up training in PyTorch using the Transformers library
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # number of training epochs
per_device_train_batch_size=16, # batch size per device during training
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
### Training Procedure
#### Training Hyperparameters
- **Training regime:** Mixed precision training (fp16) to optimize GPU usage and speed up training while maintaining precision.
```python
# Configuration for mixed precision training
from transformers import set_seed
set_seed(42) # Set seed for reproducibility
# Adding mixed precision policy
from torch.cuda.amp import GradScaler, autocast
scaler = GradScaler()
# Training loop
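# NOTE: `optim` (the optimizer) and `train_dataloader` are assumed to be defined elsewhere; this loop is only illustrative.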
for epoch in range(int(training_args.num_train_epochs)):
model.train()
for batch in train_dataloader:
optim.zero_grad()
with autocast(): # applies mixed precision
outputs = model(**batch)
loss = outputs.loss
scaler.scale(loss).backward()
scaler.step(optim)
scaler.update()
```
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model was evaluated on a separate holdout set, also generated synthetically in a similar manner as the training set.
#### Factors
- **Domains:** The evaluation considered various domains within the Hebrew Wikipedia to ensure generalizability across different types of content.
- **Difficulty:** The questions varied in complexity to test the model's ability to handle both straightforward and more complex queries.
#### Metrics
The evaluation metrics used include F1 score and exact match (EM), measuring the accuracy of the answers generated by the model.
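As a purely illustrative sketch of how exact match can be computed over generated answers (the normalization details are an assumption, not the exact evaluation script used here):

```python
def exact_match(predictions, references):
    """Fraction of predictions matching the reference answer after simple whitespace/case normalization."""
    normalize = lambda s: " ".join(s.strip().lower().split())
    matches = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return matches / len(references)

print(exact_match(["ירושלים"], ["ירושלים"]))  # 1.0
```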
### Results
The model achieved an F1 score of 88% and an exact match rate of 75%, indicating strong performance in generating accurate answers, especially in context to the synthesized questions.
## Technical Specifications
### Model Architecture and Objective
The model follows a transformer-based architecture with modifications to optimize for question generation and answering tasks.
### Compute Infrastructure
Training was performed on cloud GPUs, specifically using NVIDIA Tesla V100s, which provided the necessary compute power for efficient training.
## Environmental Impact
<!-- Optional section: Discuss any measures taken to mitigate environmental impact during training, such as using renewable energy sources or carbon offsets. -->
## Citation
**BibTeX:**
```bibtex
@misc{ronigold_dictalm2.0_instruct_finetuned_2024,
author = {Goldshmidt, Roni},
title = {Hebrew QA Fine-tuned Model},
year = {2024},
publisher = {Hugging Face's Model Hub},
journal = {Hugging Face's Model Hub}
}
```
## More Information
For more detailed usage, including advanced configurations and tips, refer to the repository README or contact the model authors. This model is part of a broader initiative to enhance NLP capabilities in the Hebrew language, aiming to support developers and researchers interested in applying advanced AI techniques to Hebrew texts.
## Model Card Authors
- **Roni Goldshmidt:** Main researcher and developer of the fine-tuned model.
## Model Card Contact
For any questions or feedback about the model, contact via Hugging Face profile or directly at [email protected]. |
adamcochrane/lora_model | adamcochrane | "2025-03-05T18:42:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-24T22:44:16Z" | ---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** adamcochrane
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
iafcy/Qwen2.5-32B-Task1-Critic-3epoch | iafcy | "2025-03-16T13:50:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-16T13:49:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katboi01/rare-puppers | katboi01 | "2022-11-19T15:04:01Z" | 186 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-11-19T15:03:49Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.89552241563797
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
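To try the classifier locally, here is a minimal sketch with the 🤗 Transformers pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="katboi01/rare-puppers")
print(classifier("path/to/dog.jpg"))  # e.g. [{'label': 'corgi', 'score': ...}, ...]
```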
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
loubnabnl/Llama-8B-Instruct-Bespoke-H4-GBS500k | loubnabnl | "2025-01-25T15:47:56Z" | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/Bespoke-Stratos-17k",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-25T13:32:40Z" | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets: HuggingFaceH4/Bespoke-Stratos-17k
library_name: transformers
model_name: Llama-8B-Instruct-Bespoke-H4-GBS500k
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Llama-8B-Instruct-Bespoke-H4-GBS500k
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [HuggingFaceH4/Bespoke-Stratos-17k](https://huggingface.co/datasets/HuggingFaceH4/Bespoke-Stratos-17k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="loubnabnl/Llama-8B-Instruct-Bespoke-H4-GBS500k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loubnabnl/huggingface/runs/2f2uixck)
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0.dev0
- Transformers: 4.48.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Helsinki-NLP/opus-mt-lua-en | Helsinki-NLP | "2023-08-16T12:00:29Z" | 111 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"lua",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-lua-en
* source languages: lua
* target languages: en
* OPUS readme: [lua-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.eval.txt)
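Given the details above, a minimal lua→en translation sketch with the 🤗 Transformers Marian classes might look like this (the source sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lua-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["<source sentence in Luba-Lulua>"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```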
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lua.en | 34.4 | 0.502 |
|
MaziyarPanahi/Mistral-7B-model_45k6e2e4-Mistral-7B-Instruct-v0.1 | MaziyarPanahi | "2024-01-17T05:38:52Z" | 21 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"pankajmathur/Mistral-7B-model_45k6e2e4",
"pytorch",
"en",
"dataset:pankajmathur/orca_mini_v1_dataset",
"dataset:pankajmathur/WizardLM_Orca",
"dataset:pankajmathur/dolly-v2_orca",
"dataset:pankajmathur/alpaca_orca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-01-17T05:33:44Z" | ---
license: apache-2.0
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- pankajmathur/Mistral-7B-model_45k6e2e4
- transformers
- pytorch
- mistral
- text-generation
- en
- dataset:pankajmathur/orca_mini_v1_dataset
- dataset:pankajmathur/WizardLM_Orca
- dataset:pankajmathur/dolly-v2_orca
- dataset:pankajmathur/alpaca_orca
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
---
# Mistral-7B-model_45k6e2e4-Mistral-7B-Instruct-v0.1
Mistral-7B-model_45k6e2e4-Mistral-7B-Instruct-v0.1 is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [pankajmathur/Mistral-7B-model_45k6e2e4](https://huggingface.co/pankajmathur/Mistral-7B-model_45k6e2e4)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: pankajmathur/Mistral-7B-model_45k6e2e4
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mistral-7B-model_45k6e2e4-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
uukuguy/speechless-codellama-34b-v2.0 | uukuguy | "2023-12-30T11:50:32Z" | 1,409 | 17 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"code",
"en",
"dataset:jondurbin/airoboros-2.2",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"arxiv:2308.12950",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-04T09:56:38Z" | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
tags:
- llama-2
- code
license: llama2
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 75.61
verified: false
---
<p><h1> speechless-codellama-34b-v2.0 </h1></p>
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-codellama-34b-v2.0-GGUF)
Code: https://github.com/uukuguy/speechless
Use the following datasets to fine-tune codellama/CodeLlama-34B in order to improve the model's inference and planning capabilities.
Total 153,013 samples.
- jondurbin/airoboros-2.2: filtered to categories related to coding, reasoning and planning. 23,462 samples.
- Open-Orca/OpenOrca: filtered to the 'cot' category of the 1M GPT-4 dataset. 74,440 samples.
- garage-bAInd/Open-Platypus: 100%, 24,926 samples.
- WizardLM/WizardLM_evol_instruct_V2_196k: coding conversation part. 30,185 samples.
## How to Prompt the Model
This model accepts the Alpaca instruction format.
For example:
```
You are an intelligent programming assistant.
### Instruction:
Implement a linked list in C++
### Response:
```
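A minimal sketch of wrapping an instruction in this template and generating with 🤗 Transformers (the exact whitespace of the template follows the example above, and the sampling parameters are illustrative):

```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="uukuguy/speechless-codellama-34b-v2.0",
    torch_dtype=torch.float16,
    device_map="auto",
)

instruction = "Implement a linked list in C++"
prompt = (
    "You are an intelligent programming assistant.\n"
    "### Instruction:\n"
    f"{instruction}\n"
    "### Response:\n"
)

outputs = generator(prompt, max_new_tokens=512, do_sample=True, temperature=0.2, top_p=0.95)
print(outputs[0]["generated_text"])
```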
## HumanEval
| human-eval | pass@1 |
| --- | --- |
| humaneval-python | 75.61 |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
| Models | pass@1 |
|------ | ------ |
| Phind-CodeLlama-34B-v2| 71.95|
| WizardCoder-Python-34B-V1.0| 70.73|
| Phind-CodeLlama-34B-Python-v1| 70.22|
| Phind-CodeLlama-34B-v1| 65.85|
| WizardCoder-Python-13B-V1.0| 62.19|
| WizardCoder-15B-V1.0| 58.12|
| CodeLlama-34B-Python| 53.29|
| CodeLlama-34B-Instruct| 50.79|
| CodeLlama-13B-Instruct| 50.6|
| CodeLlama-34B| 45.11|
| CodeLlama-13B-Python| 42.89|
| CodeLlama-13B| 35.07|
## NL2SQL
SQL-EVAL: 125/175 (71.43%)
Average rate of exact match: 67.43%
Average correct rate: 71.43%
- GPT4: 130/175 (74.29%)
- GPT3-Turbo-0613: 105/174 (60.00%)
## lm-evaluation-harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 54.35 |
| HellaSwag | 75.65 |
| MMLU | 54.67 |
| TruthfulQA | 45.21 |
| Average | 57.47 |
Training environment: 2 × H800-80G GPUs, with transformers=4.33.0, flash-attn=2.1.0, bitsandbytes=0.41.1, and peft=0.5.0.
## Training Arguments
| | |
|------ | ------ |
| lr | 2e-4 |
| lr_scheduler_type | cosine |
| weight_decay | 0.0 |
| optim | paged_adamw_8bit |
| flash_attention | True |
| rerope | False |
| max_new_tokens | 8192 |
| num_train_epochs | 3 |
| bits | 4 |
| lora_r | 64 |
| lora_alpha | 16 |
| lora_dropout | 0.05 |
| double_quant | True |
| quant_type | nf4 |
| dataset_format | airoboros |
| mini_batch_size | 4 |
| gradient_accumulation_steps | 16 |
| bf16 | True |
| | |
|------ | ------ |
| epoch | 3.0 |
| etrain_loss | 0.4261 |
| etrain_runtime | 1 day, 14:42:57.87 |
| etrain_samples_per_second | 3.227 |
| etrain_steps_per_second | 0.025 |
| eeval_loss | 0.4537 |
| eeval_runtime | 0:00:36.19 |
| eeval_samples_per_second | 5.525 |
| eeval_steps_per_second | 2.763 |
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers from `main` until the next version is released:
```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Model Details
*Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).*
**Model Developers** Meta
**Variations** Code Llama comes in three model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base version of the 13B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-codellama-34b-v2.0)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 50.96 |
| ARC (25-shot) | 54.35 |
| HellaSwag (10-shot) | 75.65 |
| MMLU (5-shot) | 54.67 |
| TruthfulQA (0-shot) | 45.21 |
| Winogrande (5-shot) | 73.56 |
| GSM8K (5-shot) | 11.6 |
| DROP (3-shot) | 41.71 |
|
theojolliffe/T5-model-1-feedback-3110 | theojolliffe | "2022-10-31T20:00:50Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-10-31T19:04:23Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5-model-1-feedback-3110
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-model-1-feedback-3110
This model is a fine-tuned version of [theojolliffe/T5-model-1-feedback-1109](https://huggingface.co/theojolliffe/T5-model-1-feedback-1109) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1605
- Rouge1: 91.3604
- Rouge2: 86.1024
- Rougel: 90.6798
- Rougelsum: 90.7011
- Gen Len: 15.7167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2711 | 1.0 | 2279 | 0.2176 | 90.3305 | 83.9311 | 89.4476 | 89.4573 | 15.7 |
| 0.1709 | 2.0 | 4558 | 0.1759 | 91.3226 | 85.9979 | 90.7558 | 90.7395 | 15.5667 |
| 0.1644 | 3.0 | 6837 | 0.1641 | 91.8385 | 86.7529 | 91.1621 | 91.1492 | 15.6792 |
| 0.1606 | 4.0 | 9116 | 0.1605 | 91.3604 | 86.1024 | 90.6798 | 90.7011 | 15.7167 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
tscstudios/u9gxh1baqxe5scrfzp1fhjithsh1_6e5a3fff-721c-416b-8bf8-3b22f8336cb6 | tscstudios | "2025-04-05T10:51:22Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-05T10:51:20Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# U9Gxh1Baqxe5Scrfzp1Fhjithsh1_6E5A3Fff 721C 416B 8Bf8 3B22F8336Cb6
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/u9gxh1baqxe5scrfzp1fhjithsh1_6e5a3fff-721c-416b-8bf8-3b22f8336cb6/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/u9gxh1baqxe5scrfzp1fhjithsh1_6e5a3fff-721c-416b-8bf8-3b22f8336cb6', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/u9gxh1baqxe5scrfzp1fhjithsh1_6e5a3fff-721c-416b-8bf8-3b22f8336cb6/discussions) to add images that show off what you’ve made with this LoRA.
|
TARARARAK/HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-64_16 | TARARARAK | "2025-03-22T05:51:49Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:Bllossom/llama-3.2-Korean-Bllossom-AICA-5B",
"base_model:adapter:Bllossom/llama-3.2-Korean-Bllossom-AICA-5B",
"license:llama3.2",
"region:us"
] | null | "2025-03-13T02:32:48Z" | ---
base_model: Bllossom/llama-3.2-Korean-Bllossom-AICA-5B
library_name: peft
license: llama3.2
tags:
- generated_from_trainer
model-index:
- name: HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-64_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HGU_rulebook-Llama3.2-Bllossom-5B_fine-tuning-QLoRA-64_16
This model is a fine-tuned version of [Bllossom/llama-3.2-Korean-Bllossom-AICA-5B](https://huggingface.co/Bllossom/llama-3.2-Korean-Bllossom-AICA-5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1570
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.8912 | 0.3182 | 50 | 12.8247 |
| 11.6977 | 0.6364 | 100 | 11.3812 |
| 9.1603 | 0.9547 | 150 | 8.9505 |
| 7.6735 | 1.2729 | 200 | 7.5549 |
| 6.7945 | 1.5911 | 250 | 6.7118 |
| 6.2617 | 1.9093 | 300 | 6.2235 |
| 6.0081 | 2.2275 | 350 | 5.9894 |
| 5.8829 | 2.5457 | 400 | 5.8750 |
| 5.8219 | 2.8640 | 450 | 5.8154 |
| 5.7859 | 3.1822 | 500 | 5.7831 |
| 5.7645 | 3.5004 | 550 | 5.7624 |
| 5.7485 | 3.8186 | 600 | 5.7478 |
| 5.7375 | 4.1368 | 650 | 5.7377 |
| 5.7339 | 4.4551 | 700 | 5.7301 |
| 5.7241 | 4.7733 | 750 | 5.7246 |
| 5.7212 | 5.0915 | 800 | 5.7204 |
| 5.7178 | 5.4097 | 850 | 5.7170 |
| 5.7158 | 5.7279 | 900 | 5.7145 |
| 5.7113 | 6.0461 | 950 | 5.7124 |
| 5.711 | 6.3644 | 1000 | 5.7107 |
| 5.7062 | 6.6826 | 1050 | 5.7093 |
| 5.7075 | 7.0008 | 1100 | 5.7082 |
| 5.7079 | 7.3190 | 1150 | 5.7074 |
| 5.7104 | 7.6372 | 1200 | 5.7067 |
| 5.7046 | 7.9554 | 1250 | 5.7063 |
| 5.7027 | 8.2737 | 1300 | 5.7058 |
| 5.7049 | 8.5919 | 1350 | 5.7056 |
| 5.7032 | 8.9101 | 1400 | 5.7053 |
| 5.7048 | 9.2283 | 1450 | 5.7053 |
| 5.7057 | 9.5465 | 1500 | 5.7052 |
| 5.7035 | 9.8648 | 1550 | 5.7052 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.46.2
- Pytorch 2.0.1+cu118
- Datasets 3.0.0
- Tokenizers 0.20.1 |
prxy5605/effee9df-681e-498f-859c-620e3b5dadcd | prxy5605 | "2025-01-12T21:41:47Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-12T21:05:35Z" | ---
library_name: peft
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: effee9df-681e-498f-859c-620e3b5dadcd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: teknium/OpenHermes-2.5-Mistral-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d254f970f6a54eb1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d254f970f6a54eb1_train_data.json
type:
field_input: query_content
field_instruction: instruction
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: prxy5605/effee9df-681e-498f-859c-620e3b5dadcd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/d254f970f6a54eb1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|im_end|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 784be21e-b8b5-45a5-afe8-5b9b30395585
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 784be21e-b8b5-45a5-afe8-5b9b30395585
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# effee9df-681e-498f-859c-620e3b5dadcd
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 0.6378 |
| 0.001 | 0.0123 | 100 | 0.0005 |
| 0.0 | 0.0246 | 200 | 0.0001 |
| 0.0001 | 0.0369 | 300 | 0.0001 |
| 0.0 | 0.0492 | 400 | 0.0001 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vocabtrimmer/xlm-v-base-trimmed-es-xnli-es | vocabtrimmer | "2023-04-21T01:49:57Z" | 114 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-21T01:46:37Z" | # `vocabtrimmer/xlm-v-base-trimmed-es-xnli-es`
This model is a fine-tuned version of [vocabtrimmer/xlm-v-base-trimmed-es](https://huggingface.co/vocabtrimmer/xlm-v-base-trimmed-es) on the [xnli](https://huggingface.co/datasets/xnli) dataset (es).
The following metrics are computed on the `test` split of [xnli](https://huggingface.co/datasets/xnli) (es).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 80.58 | 80.58 | 80.58 | 80.56 | 80.58 | 81.19 | 80.58 |
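As an illustration (not part of the original card), the checkpoint can be queried as a standard sequence-classification model; XNLI is a premise/hypothesis task, and the label names are read from the uploaded config rather than assumed here:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "vocabtrimmer/xlm-v-base-trimmed-es-xnli-es"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# XNLI is a natural language inference task, so premise and hypothesis are passed as a pair.
premise = "El niño está jugando al fútbol en el parque."
hypothesis = "Un niño practica deporte al aire libre."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label mapping comes from the model config
```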
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-v-base-trimmed-es-xnli-es/raw/main/eval.json). |
PrunaAI/cognitivecomputations-dolphin-2.9.3-mistral-7B-32k-bnb-8bit-smashed | PrunaAI | "2024-07-15T23:54:03Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pruna-ai",
"conversational",
"base_model:cognitivecomputations/dolphin-2.9.3-mistral-7B-32k",
"base_model:quantized:cognitivecomputations/dolphin-2.9.3-mistral-7B-32k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-07-15T23:50:50Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: cognitivecomputations/dolphin-2.9.3-mistral-7B-32k
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo cognitivecomputations/dolphin-2.9.3-mistral-7B-32k are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/cognitivecomputations-dolphin-2.9.3-mistral-7B-32k-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9.3-mistral-7B-32k")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model cognitivecomputations/dolphin-2.9.3-mistral-7B-32k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
tensorblock/llamav2-LoRaco-7b-merged-GGUF | tensorblock | "2024-12-11T18:27:45Z" | 39 | 0 | transformers | [
"transformers",
"gguf",
"gaudi",
"intel",
"lora",
"TensorBlock",
"GGUF",
"text-generation",
"en",
"es",
"ru",
"de",
"zh",
"fr",
"th",
"pt",
"ca",
"ko",
"uk",
"it",
"ja",
"dataset:timdettmers/openassistant-guanaco",
"base_model:FunDialogues/llamav2-LoRaco-7b-merged",
"base_model:adapter:FunDialogues/llamav2-LoRaco-7b-merged",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-11T15:47:21Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- timdettmers/openassistant-guanaco
pipeline_tag: text-generation
language:
- en
- es
- ru
- de
- zh
- fr
- th
- pt
- ca
- ko
- uk
- it
- ja
tags:
- gaudi
- intel
- lora
- TensorBlock
- GGUF
base_model: FunDialogues/llamav2-LoRaco-7b-merged
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## FunDialogues/llamav2-LoRaco-7b-merged - GGUF
This repo contains GGUF format model files for [FunDialogues/llamav2-LoRaco-7b-merged](https://huggingface.co/FunDialogues/llamav2-LoRaco-7b-merged).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [llamav2-LoRaco-7b-merged-Q2_K.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [llamav2-LoRaco-7b-merged-Q3_K_S.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q3_K_S.gguf) | Q3_K_S | 2.948 GB | very small, high quality loss |
| [llamav2-LoRaco-7b-merged-Q3_K_M.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q3_K_M.gguf) | Q3_K_M | 3.298 GB | very small, high quality loss |
| [llamav2-LoRaco-7b-merged-Q3_K_L.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q3_K_L.gguf) | Q3_K_L | 3.597 GB | small, substantial quality loss |
| [llamav2-LoRaco-7b-merged-Q4_0.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q4_0.gguf) | Q4_0 | 3.826 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llamav2-LoRaco-7b-merged-Q4_K_S.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q4_K_S.gguf) | Q4_K_S | 3.857 GB | small, greater quality loss |
| [llamav2-LoRaco-7b-merged-Q4_K_M.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q4_K_M.gguf) | Q4_K_M | 4.081 GB | medium, balanced quality - recommended |
| [llamav2-LoRaco-7b-merged-Q5_0.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q5_0.gguf) | Q5_0 | 4.652 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llamav2-LoRaco-7b-merged-Q5_K_S.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q5_K_S.gguf) | Q5_K_S | 4.652 GB | large, low quality loss - recommended |
| [llamav2-LoRaco-7b-merged-Q5_K_M.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q5_K_M.gguf) | Q5_K_M | 4.783 GB | large, very low quality loss - recommended |
| [llamav2-LoRaco-7b-merged-Q6_K.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q6_K.gguf) | Q6_K | 5.529 GB | very large, extremely low quality loss |
| [llamav2-LoRaco-7b-merged-Q8_0.gguf](https://huggingface.co/tensorblock/llamav2-LoRaco-7b-merged-GGUF/blob/main/llamav2-LoRaco-7b-merged-Q8_0.gguf) | Q8_0 | 7.161 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
Firstly, install Huggingface Client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/llamav2-LoRaco-7b-merged-GGUF --include "llamav2-LoRaco-7b-merged-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/llamav2-LoRaco-7b-merged-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
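As an illustrative sketch (not part of the original instructions), a downloaded quant can be loaded locally with the `llama-cpp-python` bindings; the file name below assumes the Q4_K_M quant listed above and the `MY_LOCAL_DIR` directory used in the download commands:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="MY_LOCAL_DIR/llamav2-LoRaco-7b-merged-Q4_K_M.gguf",
    n_ctx=2048,  # context window
)

output = llm("Q: What is the capital of France? A:", max_tokens=32)
print(output["choices"][0]["text"])
```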
|
dawn17/MaidStarling-2x7B-base | dawn17 | "2024-04-13T13:53:28Z" | 48 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-04T20:15:19Z" | ---
license: apache-2.0
---
---
```yaml
base_model: /Users/dawn/git/models/Silicon-Maid-7B
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: /Users/dawn/git/models/Silicon-Maid-7B
    positive_prompts:
      - "roleplay"
  - source_model: /Users/dawn/git/models/Starling-LM-7B-beta
    positive_prompts:
      - "chat"
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.76|
|AI2 Reasoning Challenge (25-Shot)|68.43|
|HellaSwag (10-Shot) |86.28|
|MMLU (5-Shot) |60.34|
|TruthfulQA (0-shot) |60.34|
|Winogrande (5-shot) |78.93|
|GSM8k (5-shot) |65.43| |
manupande21/llama3.2-1B_PMC-finetuned-full-model | manupande21 | "2024-10-02T07:38:43Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-02T07:34:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso14/f749f4d6-ced7-410b-992b-c14cf518bede | lesso14 | "2025-02-09T12:24:22Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"region:us"
] | null | "2025-02-09T11:31:10Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f749f4d6-ced7-410b-992b-c14cf518bede
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 271cf64efe5e0063_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/271cf64efe5e0063_train_data.json
type:
field_instruction: document
field_output: summary
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso14/f749f4d6-ced7-410b-992b-c14cf518bede
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000214
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/G.O.D/271cf64efe5e0063_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e0cc7f7d-a9eb-4ae9-8e42-a26c370e747a
wandb_project: 14a
wandb_run: your_name
wandb_runid: e0cc7f7d-a9eb-4ae9-8e42-a26c370e747a
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f749f4d6-ced7-410b-992b-c14cf518bede
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000214
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 0.6148 |
| 0.5296 | 0.0155 | 50 | 0.5695 |
| 0.7042 | 0.0310 | 100 | 0.5836 |
| 0.5716 | 0.0464 | 150 | 0.5928 |
| 0.5389 | 0.0619 | 200 | 0.5599 |
| 0.5174 | 0.0774 | 250 | 0.5283 |
| 0.5265 | 0.0929 | 300 | 0.4725 |
| 0.3984 | 0.1084 | 350 | 0.4386 |
| 0.4361 | 0.1238 | 400 | 0.4101 |
| 0.4305 | 0.1393 | 450 | 0.3892 |
| 0.3471 | 0.1548 | 500 | 0.3855 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jluckyboyj/train_crawdata | jluckyboyj | "2023-11-03T06:33:47Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-11-03T06:33:45Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
YakovElm/Jira5SetFitModel_Train_balance_ratio_Half | YakovElm | "2023-06-10T12:20:43Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-06-10T12:20:10Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Jira5SetFitModel_Train_balance_ratio_Half
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Jira5SetFitModel_Train_balance_ratio_Half")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
nlp-esg-scoring/bert-base-finetuned-esg-TCFD-clean | nlp-esg-scoring | "2022-07-25T07:29:45Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-07-25T01:48:03Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: nlp-esg-scoring/bert-base-finetuned-esg-TCFD-clean
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-esg-scoring/bert-base-finetuned-esg-TCFD-clean
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7816
- Validation Loss: 2.3592
- Epoch: 9
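As an illustration (not part of the original card), the checkpoint can be queried with the fill-mask pipeline; `framework="tf"` is used here on the assumption that the repository ships TensorFlow weights, as the tags suggest:

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="nlp-esg-scoring/bert-base-finetuned-esg-TCFD-clean",
    framework="tf",  # the repository contains TensorFlow weights
)

# BERT-style models use the [MASK] token.
for pred in fill_mask("Our company discloses its climate-related [MASK] every year."):
    print(pred["token_str"], round(pred["score"], 4))
```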
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -571, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7776 | 2.3647 | 0 |
| 2.7744 | 2.3469 | 1 |
| 2.7683 | 2.3527 | 2 |
| 2.7743 | 2.3708 | 3 |
| 2.7809 | 2.3819 | 4 |
| 2.7674 | 2.3599 | 5 |
| 2.7715 | 2.3541 | 6 |
| 2.7766 | 2.3423 | 7 |
| 2.7834 | 2.3535 | 8 |
| 2.7816 | 2.3592 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
imomayiz/unsloth-Qwen2.5-1.5B-Instruct-bnb-4bit-9c9cegqr | imomayiz | "2025-04-11T01:34:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-11T01:17:27Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nextcloud-AI/opus-mt-ja-tr | Nextcloud-AI | "2023-08-16T11:59:24Z" | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ja",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2024-02-23T10:45:50Z" | ---
language:
- ja
- tr
tags:
- translation
license: apache-2.0
---
### jpn-tur
* source group: Japanese
* target group: Turkish
* OPUS readme: [jpn-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md)
* model: transformer-align
* source language(s): jpn jpn_Bopo jpn_Hang jpn_Hani jpn_Hira jpn_Kana jpn_Yiii
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.tur | 16.7 | 0.434 |
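A minimal usage sketch (illustrative, not from the original card), using the standard Marian classes from `transformers` with this repository's model id:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Nextcloud-AI/opus-mt-ja-tr"  # this repository
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["猫はソファの上で寝ています。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```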
### System Info:
- hf_name: jpn-tur
- source_languages: jpn
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'tr']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: tur
- short_pair: ja-tr
- chrF2_score: 0.434
- bleu: 16.7
- brevity_penalty: 0.932
- ref_len: 4755.0
- src_name: Japanese
- tgt_name: Turkish
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: tr
- prefer_old: False
- long_pair: jpn-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
stablediffusionapi/dreamshapersdxl10 | stablediffusionapi | "2025-01-20T11:31:22Z" | 35 | 1 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2023-10-02T04:23:18Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# DreamShaper_SDXL1.0 API Inference

## Get API Key
Get an API key from [ModelsLab](https://modelslab.com/); no payment is needed.
Replace the key in the code below and change **model_id** to "dreamshapersdxl10".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/dreamshapersdxl10)
Model link: [View model](https://stablediffusionapi.com/models/dreamshapersdxl10)
Credits: [View credits](https://civitai.com/?query=DreamShaper_SDXL1.0)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "dreamshapersdxl10",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
jh1517/taxi_q_learning | jh1517 | "2023-09-26T12:36:46Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-26T12:36:06Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_q_learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jh1517/taxi_q_learning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
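As an illustrative follow-up to the snippet above (not part of the original card), the downloaded table can be rolled out greedily. This assumes the pickle stores the Q-table under the `"qtable"` key, as in the Deep RL course notebooks, and a gymnasium-style `reset`/`step` API:

```python
import numpy as np

qtable = model["qtable"]  # assumed key, as used in the course notebooks

state, info = env.reset()
done, total_reward = False, 0.0

while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print(f"Episode return: {total_reward}")
```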
|
OsmyReal/Ayuda | OsmyReal | "2021-08-28T06:12:44Z" | 0 | 0 | null | [
"region:us"
] | null | "2022-03-02T23:29:04Z" | git lfs install
git clone https://huggingface.co/r3dhummingbird/DialoGPT-medium-joshua |
zelk12/MT3-GP-gemma-2-RPMHv0.1RAt0.25v0.1-9B | zelk12 | "2024-10-15T19:00:26Z" | 17 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:zelk12/recoilme-gemma-2-psy10k-mental_healt-9B-v0.1",
"base_model:merge:zelk12/recoilme-gemma-2-psy10k-mental_healt-9B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-15T18:54:02Z" | ---
base_model:
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- zelk12/recoilme-gemma-2-psy10k-mental_healt-9B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
* [zelk12/recoilme-gemma-2-psy10k-mental_healt-9B-v0.1](https://huggingface.co/zelk12/recoilme-gemma-2-psy10k-mental_healt-9B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/recoilme-gemma-2-psy10k-mental_healt-9B-v0.1
- model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
merge_method: slerp
base_model: zelk12/recoilme-gemma-2-psy10k-mental_healt-9B-v0.1
dtype: bfloat16
parameters:
t: 0.5
```
|
PrunaAI/microsoft-Phi-3-mini-4k-instruct-HQQ-8bit-smashed | PrunaAI | "2025-04-04T05:24:47Z" | 14 | 0 | null | [
"phi3",
"pruna-ai",
"custom_code",
"hqq",
"region:us"
] | null | "2025-03-22T05:57:49Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo ORIGINAL_REPO_NAME are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/microsoft-Phi-3-mini-4k-instruct-HQQ-8bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/microsoft-Phi-3-mini-4k-instruct-HQQ-8bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
gyr66/machine_translation | gyr66 | "2023-12-20T06:23:56Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"feature-extraction",
"translation",
"en",
"zh",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"endpoints_compatible",
"region:us"
] | translation | "2023-12-20T04:27:46Z" | ---
language:
- en
- zh
metrics:
- sacrebleu
pipeline_tag: translation
base_model: facebook/mbart-large-cc25
---
# eval
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the IWSLT14 En-Zh dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.8405
- eval_bleu: 3.5173
- eval_gen_len: 21.5826
It achieves the following results on the test set:
- test_loss: 3.8337
- test_bleu: 3.277
- test_gen_len: 21.6287
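A minimal usage sketch (illustrative, not from the original card), assuming the fine-tuned checkpoint keeps the standard mBART-cc25 language codes (`en_XX` → `zh_CN`):

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "gyr66/machine_translation"  # this repository
tokenizer = MBartTokenizer.from_pretrained(model_name, src_lang="en_XX", tgt_lang="zh_CN")
model = MBartForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("I love machine translation.", return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["zh_CN"],  # force Chinese output
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```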
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 7
- num_epochs: 9
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0 |
mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF | mradermacher | "2024-11-13T23:49:47Z" | 28 | 0 | transformers | [
"transformers",
"gguf",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"abliterated",
"uncensored",
"en",
"base_model:huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-13T23:31:38Z" | ---
base_model: huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5-Instruct-abliterate/blob/main/LICENSE
quantized_by: mradermacher
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q4_0_4_4.gguf) | Q4_0_4_4 | 0.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-0.5B-Instruct-abliterated-GGUF/resolve/main/Qwen2.5-Coder-0.5B-Instruct-abliterated.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NathanS-HuggingFace/PyramidsRND | NathanS-HuggingFace | "2023-04-23T18:31:36Z" | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-04-23T18:29:28Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: NathanS-HuggingFace/PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
lesso10/44a68ad9-3fa7-4114-a9ff-7c1ec1fbe5ca | lesso10 | "2025-03-16T11:22:07Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
"base_model:adapter:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
"license:apache-2.0",
"region:us"
] | null | "2025-03-14T22:11:14Z" | ---
library_name: peft
license: apache-2.0
base_model: OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 44a68ad9-3fa7-4114-a9ff-7c1ec1fbe5ca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 44a68ad9-3fa7-4114-a9ff-7c1ec1fbe5ca
This model is a fine-tuned version of [OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00021
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0014 | 1 | 1.5018 |
| 7.3288 | 0.6938 | 500 | 0.9062 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
joosma/ppo-v3 | joosma | "2024-05-21T10:40:39Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2024-05-21T10:31:59Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -151.06 +/- 77.67
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.0002
'num_envs': 20
'num_steps': 2048
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 10
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'joosma/ppo-v3'
'batch_size': 40960
'minibatch_size': 4096}
```
|
Yntec/photoMovieRealistic | Yntec | "2024-04-17T20:43:16Z" | 25,759 | 21 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"MagicArt35",
"Photorealistic",
"cinestill",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-05T07:31:21Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- MagicArt35
- Photorealistic
- cinestill
---
# Photo Movie Realistic
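A minimal text-to-image usage sketch, assuming the standard `diffusers` `StableDiffusionPipeline` API indicated by the repo tags (the prompt is only an example); the original model page is linked below.
```python
# Hedged sketch: load this checkpoint with the standard StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/photoMovieRealistic", torch_dtype=torch.float16
).to("cuda")

image = pipe("cinestill photo, rainy city street at night, movie still").images[0]
image.save("photo_movie_realistic.png")
```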
Original page:
https://civitai.com/models/95413/photo-movie-realistic |
MayBashendy/ArabicNewSplits7_B_usingALLEssays_FineTuningAraBERT_run2_AugV5_k12_task7_organization | MayBashendy | "2025-01-22T17:44:57Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-20T08:36:58Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_B_usingALLEssays_FineTuningAraBERT_run2_AugV5_k12_task7_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_B_usingALLEssays_FineTuningAraBERT_run2_AugV5_k12_task7_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5891
- Qwk: 0.4856
- Mse: 0.5891
- Rmse: 0.7675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
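The evaluation columns reported below are quadratic weighted kappa (Qwk), mean squared error (Mse), and its square root (Rmse). A minimal sketch of computing them, assuming scikit-learn (the card does not state the actual evaluation code) and hypothetical score labels:
```python
# Hedged sketch of the reported metrics (Qwk, Mse, Rmse) on hypothetical labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 1, 3])  # hypothetical gold organization scores
y_pred = np.array([0, 2, 2, 1, 2])  # hypothetical model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = float(np.sqrt(mse))
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```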
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0328 | 2 | 2.4902 | -0.0924 | 2.4902 | 1.5780 |
| No log | 0.0656 | 4 | 1.1697 | 0.1265 | 1.1697 | 1.0815 |
| No log | 0.0984 | 6 | 0.9190 | 0.0955 | 0.9190 | 0.9587 |
| No log | 0.1311 | 8 | 0.8454 | 0.0778 | 0.8454 | 0.9195 |
| No log | 0.1639 | 10 | 0.9588 | 0.0327 | 0.9588 | 0.9792 |
| No log | 0.1967 | 12 | 1.1822 | 0.0692 | 1.1822 | 1.0873 |
| No log | 0.2295 | 14 | 1.2553 | 0.0576 | 1.2553 | 1.1204 |
| No log | 0.2623 | 16 | 1.1568 | 0.0896 | 1.1568 | 1.0756 |
| No log | 0.2951 | 18 | 1.1175 | 0.1208 | 1.1175 | 1.0571 |
| No log | 0.3279 | 20 | 1.0352 | 0.0719 | 1.0352 | 1.0175 |
| No log | 0.3607 | 22 | 0.9616 | 0.1285 | 0.9616 | 0.9806 |
| No log | 0.3934 | 24 | 0.9525 | -0.0020 | 0.9525 | 0.9760 |
| No log | 0.4262 | 26 | 0.9570 | 0.1089 | 0.9570 | 0.9783 |
| No log | 0.4590 | 28 | 0.8598 | 0.2019 | 0.8598 | 0.9273 |
| No log | 0.4918 | 30 | 0.8008 | 0.1479 | 0.8008 | 0.8949 |
| No log | 0.5246 | 32 | 0.8052 | 0.1867 | 0.8052 | 0.8973 |
| No log | 0.5574 | 34 | 0.8922 | 0.2979 | 0.8922 | 0.9445 |
| No log | 0.5902 | 36 | 1.0731 | 0.0912 | 1.0731 | 1.0359 |
| No log | 0.6230 | 38 | 1.0655 | 0.0569 | 1.0655 | 1.0322 |
| No log | 0.6557 | 40 | 1.0214 | 0.1155 | 1.0214 | 1.0106 |
| No log | 0.6885 | 42 | 0.8687 | 0.1884 | 0.8687 | 0.9321 |
| No log | 0.7213 | 44 | 0.8696 | 0.2604 | 0.8696 | 0.9325 |
| No log | 0.7541 | 46 | 0.8819 | 0.1495 | 0.8819 | 0.9391 |
| No log | 0.7869 | 48 | 1.0503 | 0.0844 | 1.0503 | 1.0248 |
| No log | 0.8197 | 50 | 1.4484 | 0.0338 | 1.4484 | 1.2035 |
| No log | 0.8525 | 52 | 1.6178 | -0.0064 | 1.6178 | 1.2719 |
| No log | 0.8852 | 54 | 1.5125 | -0.0295 | 1.5125 | 1.2298 |
| No log | 0.9180 | 56 | 1.2326 | 0.0736 | 1.2326 | 1.1102 |
| No log | 0.9508 | 58 | 0.9777 | 0.0856 | 0.9777 | 0.9888 |
| No log | 0.9836 | 60 | 0.8506 | 0.0265 | 0.8506 | 0.9223 |
| No log | 1.0164 | 62 | 0.8394 | 0.1648 | 0.8394 | 0.9162 |
| No log | 1.0492 | 64 | 0.8565 | 0.2467 | 0.8565 | 0.9255 |
| No log | 1.0820 | 66 | 0.8502 | 0.2227 | 0.8502 | 0.9221 |
| No log | 1.1148 | 68 | 0.8507 | 0.0154 | 0.8507 | 0.9223 |
| No log | 1.1475 | 70 | 0.8663 | -0.0121 | 0.8663 | 0.9307 |
| No log | 1.1803 | 72 | 0.8656 | 0.0313 | 0.8656 | 0.9304 |
| No log | 1.2131 | 74 | 0.8620 | -0.0026 | 0.8620 | 0.9284 |
| No log | 1.2459 | 76 | 0.8471 | -0.0192 | 0.8471 | 0.9204 |
| No log | 1.2787 | 78 | 0.8671 | 0.2227 | 0.8671 | 0.9312 |
| No log | 1.3115 | 80 | 0.8559 | 0.0860 | 0.8559 | 0.9251 |
| No log | 1.3443 | 82 | 0.8519 | 0.0662 | 0.8519 | 0.9230 |
| No log | 1.3770 | 84 | 0.8458 | 0.0662 | 0.8458 | 0.9197 |
| No log | 1.4098 | 86 | 0.8240 | 0.1463 | 0.8240 | 0.9078 |
| No log | 1.4426 | 88 | 0.8378 | 0.1647 | 0.8378 | 0.9153 |
| No log | 1.4754 | 90 | 0.8512 | 0.2015 | 0.8512 | 0.9226 |
| No log | 1.5082 | 92 | 0.8797 | 0.1889 | 0.8797 | 0.9379 |
| No log | 1.5410 | 94 | 0.9150 | 0.2550 | 0.9150 | 0.9566 |
| No log | 1.5738 | 96 | 1.0033 | 0.1506 | 1.0033 | 1.0017 |
| No log | 1.6066 | 98 | 1.0439 | 0.1241 | 1.0439 | 1.0217 |
| No log | 1.6393 | 100 | 1.0377 | 0.1206 | 1.0377 | 1.0187 |
| No log | 1.6721 | 102 | 0.9512 | 0.2047 | 0.9512 | 0.9753 |
| No log | 1.7049 | 104 | 0.9200 | 0.2600 | 0.9200 | 0.9592 |
| No log | 1.7377 | 106 | 0.8501 | 0.3523 | 0.8501 | 0.9220 |
| No log | 1.7705 | 108 | 0.7985 | 0.3536 | 0.7985 | 0.8936 |
| No log | 1.8033 | 110 | 0.8327 | 0.3613 | 0.8327 | 0.9125 |
| No log | 1.8361 | 112 | 0.8063 | 0.2549 | 0.8063 | 0.8979 |
| No log | 1.8689 | 114 | 0.7273 | 0.3754 | 0.7273 | 0.8528 |
| No log | 1.9016 | 116 | 0.7235 | 0.3390 | 0.7235 | 0.8506 |
| No log | 1.9344 | 118 | 0.7376 | 0.3704 | 0.7376 | 0.8588 |
| No log | 1.9672 | 120 | 0.7652 | 0.3739 | 0.7652 | 0.8748 |
| No log | 2.0 | 122 | 0.7867 | 0.3378 | 0.7867 | 0.8870 |
| No log | 2.0328 | 124 | 0.7765 | 0.3941 | 0.7765 | 0.8812 |
| No log | 2.0656 | 126 | 0.7421 | 0.3785 | 0.7421 | 0.8614 |
| No log | 2.0984 | 128 | 0.7296 | 0.3930 | 0.7296 | 0.8542 |
| No log | 2.1311 | 130 | 0.7261 | 0.3542 | 0.7261 | 0.8521 |
| No log | 2.1639 | 132 | 0.7428 | 0.3829 | 0.7428 | 0.8619 |
| No log | 2.1967 | 134 | 0.7346 | 0.3936 | 0.7346 | 0.8571 |
| No log | 2.2295 | 136 | 0.7794 | 0.3392 | 0.7794 | 0.8828 |
| No log | 2.2623 | 138 | 0.7396 | 0.3683 | 0.7396 | 0.8600 |
| No log | 2.2951 | 140 | 0.7307 | 0.2884 | 0.7307 | 0.8548 |
| No log | 2.3279 | 142 | 0.7785 | 0.3988 | 0.7785 | 0.8823 |
| No log | 2.3607 | 144 | 0.8766 | 0.3559 | 0.8766 | 0.9363 |
| No log | 2.3934 | 146 | 0.8526 | 0.4396 | 0.8526 | 0.9233 |
| No log | 2.4262 | 148 | 0.8250 | 0.3896 | 0.8250 | 0.9083 |
| No log | 2.4590 | 150 | 0.9111 | 0.2382 | 0.9111 | 0.9545 |
| No log | 2.4918 | 152 | 1.0682 | 0.2125 | 1.0682 | 1.0336 |
| No log | 2.5246 | 154 | 1.0037 | 0.2725 | 1.0037 | 1.0018 |
| No log | 2.5574 | 156 | 0.8467 | 0.4140 | 0.8467 | 0.9201 |
| No log | 2.5902 | 158 | 0.8014 | 0.4165 | 0.8014 | 0.8952 |
| No log | 2.6230 | 160 | 0.7790 | 0.4051 | 0.7790 | 0.8826 |
| No log | 2.6557 | 162 | 0.7818 | 0.4122 | 0.7818 | 0.8842 |
| No log | 2.6885 | 164 | 0.7562 | 0.3937 | 0.7562 | 0.8696 |
| No log | 2.7213 | 166 | 0.7208 | 0.3590 | 0.7208 | 0.8490 |
| No log | 2.7541 | 168 | 0.7070 | 0.3474 | 0.7070 | 0.8408 |
| No log | 2.7869 | 170 | 0.7184 | 0.3950 | 0.7184 | 0.8476 |
| No log | 2.8197 | 172 | 0.7237 | 0.4006 | 0.7237 | 0.8507 |
| No log | 2.8525 | 174 | 0.7313 | 0.4006 | 0.7313 | 0.8552 |
| No log | 2.8852 | 176 | 0.7491 | 0.4190 | 0.7491 | 0.8655 |
| No log | 2.9180 | 178 | 0.7530 | 0.4 | 0.7530 | 0.8678 |
| No log | 2.9508 | 180 | 0.7624 | 0.2790 | 0.7624 | 0.8732 |
| No log | 2.9836 | 182 | 0.7472 | 0.3259 | 0.7472 | 0.8644 |
| No log | 3.0164 | 184 | 0.7450 | 0.3622 | 0.7450 | 0.8631 |
| No log | 3.0492 | 186 | 0.7529 | 0.3178 | 0.7529 | 0.8677 |
| No log | 3.0820 | 188 | 0.7530 | 0.3768 | 0.7530 | 0.8678 |
| No log | 3.1148 | 190 | 0.7403 | 0.3995 | 0.7403 | 0.8604 |
| No log | 3.1475 | 192 | 0.7237 | 0.4377 | 0.7237 | 0.8507 |
| No log | 3.1803 | 194 | 0.6817 | 0.4484 | 0.6817 | 0.8256 |
| No log | 3.2131 | 196 | 0.6923 | 0.4044 | 0.6923 | 0.8321 |
| No log | 3.2459 | 198 | 0.6984 | 0.3899 | 0.6984 | 0.8357 |
| No log | 3.2787 | 200 | 0.7266 | 0.4131 | 0.7266 | 0.8524 |
| No log | 3.3115 | 202 | 0.7679 | 0.4076 | 0.7679 | 0.8763 |
| No log | 3.3443 | 204 | 0.8034 | 0.3579 | 0.8034 | 0.8963 |
| No log | 3.3770 | 206 | 0.7668 | 0.3913 | 0.7668 | 0.8757 |
| No log | 3.4098 | 208 | 0.6841 | 0.4461 | 0.6841 | 0.8271 |
| No log | 3.4426 | 210 | 0.6649 | 0.4364 | 0.6649 | 0.8154 |
| No log | 3.4754 | 212 | 0.6682 | 0.4364 | 0.6682 | 0.8175 |
| No log | 3.5082 | 214 | 0.6804 | 0.4517 | 0.6804 | 0.8249 |
| No log | 3.5410 | 216 | 0.6831 | 0.4619 | 0.6831 | 0.8265 |
| No log | 3.5738 | 218 | 0.6922 | 0.4256 | 0.6922 | 0.8320 |
| No log | 3.6066 | 220 | 0.7142 | 0.3856 | 0.7142 | 0.8451 |
| No log | 3.6393 | 222 | 0.7414 | 0.3814 | 0.7414 | 0.8610 |
| No log | 3.6721 | 224 | 0.7548 | 0.4167 | 0.7548 | 0.8688 |
| No log | 3.7049 | 226 | 0.7467 | 0.3959 | 0.7467 | 0.8641 |
| No log | 3.7377 | 228 | 0.7122 | 0.3811 | 0.7122 | 0.8439 |
| No log | 3.7705 | 230 | 0.6875 | 0.4991 | 0.6875 | 0.8292 |
| No log | 3.8033 | 232 | 0.6649 | 0.3887 | 0.6649 | 0.8154 |
| No log | 3.8361 | 234 | 0.6579 | 0.3127 | 0.6579 | 0.8111 |
| No log | 3.8689 | 236 | 0.6335 | 0.3622 | 0.6335 | 0.7959 |
| No log | 3.9016 | 238 | 0.6206 | 0.3995 | 0.6206 | 0.7878 |
| No log | 3.9344 | 240 | 0.6729 | 0.3716 | 0.6729 | 0.8203 |
| No log | 3.9672 | 242 | 0.6821 | 0.3789 | 0.6821 | 0.8259 |
| No log | 4.0 | 244 | 0.6938 | 0.4461 | 0.6938 | 0.8330 |
| No log | 4.0328 | 246 | 0.7173 | 0.4736 | 0.7173 | 0.8470 |
| No log | 4.0656 | 248 | 0.7221 | 0.4032 | 0.7221 | 0.8497 |
| No log | 4.0984 | 250 | 0.7178 | 0.3935 | 0.7178 | 0.8472 |
| No log | 4.1311 | 252 | 0.6880 | 0.3979 | 0.6880 | 0.8294 |
| No log | 4.1639 | 254 | 0.6438 | 0.5104 | 0.6438 | 0.8024 |
| No log | 4.1967 | 256 | 0.6541 | 0.5235 | 0.6541 | 0.8087 |
| No log | 4.2295 | 258 | 0.6697 | 0.4562 | 0.6697 | 0.8183 |
| No log | 4.2623 | 260 | 0.6598 | 0.5037 | 0.6598 | 0.8123 |
| No log | 4.2951 | 262 | 0.6605 | 0.5087 | 0.6605 | 0.8127 |
| No log | 4.3279 | 264 | 0.7092 | 0.3829 | 0.7092 | 0.8422 |
| No log | 4.3607 | 266 | 0.6655 | 0.4261 | 0.6655 | 0.8158 |
| No log | 4.3934 | 268 | 0.6256 | 0.5110 | 0.6256 | 0.7910 |
| No log | 4.4262 | 270 | 0.5963 | 0.5565 | 0.5963 | 0.7722 |
| No log | 4.4590 | 272 | 0.5886 | 0.6344 | 0.5886 | 0.7672 |
| No log | 4.4918 | 274 | 0.6178 | 0.4518 | 0.6178 | 0.7860 |
| No log | 4.5246 | 276 | 0.6307 | 0.4747 | 0.6307 | 0.7941 |
| No log | 4.5574 | 278 | 0.5806 | 0.5826 | 0.5806 | 0.7619 |
| No log | 4.5902 | 280 | 0.5689 | 0.5584 | 0.5689 | 0.7543 |
| No log | 4.6230 | 282 | 0.5862 | 0.5586 | 0.5862 | 0.7656 |
| No log | 4.6557 | 284 | 0.6123 | 0.5736 | 0.6123 | 0.7825 |
| No log | 4.6885 | 286 | 0.6026 | 0.5966 | 0.6026 | 0.7762 |
| No log | 4.7213 | 288 | 0.5986 | 0.5767 | 0.5986 | 0.7737 |
| No log | 4.7541 | 290 | 0.6368 | 0.5687 | 0.6368 | 0.7980 |
| No log | 4.7869 | 292 | 0.6227 | 0.5420 | 0.6227 | 0.7891 |
| No log | 4.8197 | 294 | 0.5870 | 0.6059 | 0.5870 | 0.7662 |
| No log | 4.8525 | 296 | 0.5815 | 0.6161 | 0.5815 | 0.7626 |
| No log | 4.8852 | 298 | 0.5860 | 0.5728 | 0.5860 | 0.7655 |
| No log | 4.9180 | 300 | 0.6392 | 0.5421 | 0.6392 | 0.7995 |
| No log | 4.9508 | 302 | 0.6113 | 0.5205 | 0.6113 | 0.7818 |
| No log | 4.9836 | 304 | 0.5658 | 0.5507 | 0.5658 | 0.7522 |
| No log | 5.0164 | 306 | 0.5996 | 0.5501 | 0.5996 | 0.7743 |
| No log | 5.0492 | 308 | 0.6001 | 0.5501 | 0.6001 | 0.7746 |
| No log | 5.0820 | 310 | 0.5434 | 0.5440 | 0.5434 | 0.7371 |
| No log | 5.1148 | 312 | 0.6160 | 0.5849 | 0.6160 | 0.7848 |
| No log | 5.1475 | 314 | 0.6867 | 0.5281 | 0.6867 | 0.8287 |
| No log | 5.1803 | 316 | 0.6202 | 0.5328 | 0.6202 | 0.7875 |
| No log | 5.2131 | 318 | 0.5331 | 0.5702 | 0.5331 | 0.7302 |
| No log | 5.2459 | 320 | 0.5318 | 0.5398 | 0.5318 | 0.7293 |
| No log | 5.2787 | 322 | 0.5432 | 0.5718 | 0.5432 | 0.7370 |
| No log | 5.3115 | 324 | 0.6199 | 0.5003 | 0.6199 | 0.7873 |
| No log | 5.3443 | 326 | 0.5754 | 0.5922 | 0.5754 | 0.7586 |
| No log | 5.3770 | 328 | 0.5133 | 0.5826 | 0.5133 | 0.7164 |
| No log | 5.4098 | 330 | 0.5112 | 0.5234 | 0.5112 | 0.7150 |
| No log | 5.4426 | 332 | 0.5233 | 0.6034 | 0.5233 | 0.7234 |
| No log | 5.4754 | 334 | 0.6026 | 0.5190 | 0.6026 | 0.7762 |
| No log | 5.5082 | 336 | 0.6505 | 0.5205 | 0.6505 | 0.8066 |
| No log | 5.5410 | 338 | 0.6270 | 0.5312 | 0.6270 | 0.7918 |
| No log | 5.5738 | 340 | 0.6290 | 0.5355 | 0.6290 | 0.7931 |
| No log | 5.6066 | 342 | 0.6226 | 0.5180 | 0.6226 | 0.7891 |
| No log | 5.6393 | 344 | 0.6490 | 0.4946 | 0.6490 | 0.8056 |
| No log | 5.6721 | 346 | 0.7425 | 0.4946 | 0.7425 | 0.8617 |
| No log | 5.7049 | 348 | 0.8865 | 0.4134 | 0.8865 | 0.9415 |
| No log | 5.7377 | 350 | 0.9062 | 0.4092 | 0.9062 | 0.9519 |
| No log | 5.7705 | 352 | 0.7650 | 0.4703 | 0.7650 | 0.8747 |
| No log | 5.8033 | 354 | 0.5902 | 0.5313 | 0.5902 | 0.7682 |
| No log | 5.8361 | 356 | 0.5394 | 0.5420 | 0.5394 | 0.7345 |
| No log | 5.8689 | 358 | 0.5459 | 0.6233 | 0.5459 | 0.7388 |
| No log | 5.9016 | 360 | 0.5926 | 0.4898 | 0.5926 | 0.7698 |
| No log | 5.9344 | 362 | 0.6716 | 0.4946 | 0.6716 | 0.8195 |
| No log | 5.9672 | 364 | 0.7551 | 0.4562 | 0.7551 | 0.8689 |
| No log | 6.0 | 366 | 0.8228 | 0.3847 | 0.8228 | 0.9071 |
| No log | 6.0328 | 368 | 0.8066 | 0.3847 | 0.8066 | 0.8981 |
| No log | 6.0656 | 370 | 0.7227 | 0.4756 | 0.7227 | 0.8501 |
| No log | 6.0984 | 372 | 0.6287 | 0.4761 | 0.6287 | 0.7929 |
| No log | 6.1311 | 374 | 0.6030 | 0.4833 | 0.6030 | 0.7765 |
| No log | 6.1639 | 376 | 0.6078 | 0.4783 | 0.6078 | 0.7796 |
| No log | 6.1967 | 378 | 0.5938 | 0.4981 | 0.5938 | 0.7706 |
| No log | 6.2295 | 380 | 0.6053 | 0.4800 | 0.6053 | 0.7780 |
| No log | 6.2623 | 382 | 0.5790 | 0.5267 | 0.5790 | 0.7609 |
| No log | 6.2951 | 384 | 0.5753 | 0.5250 | 0.5753 | 0.7585 |
| No log | 6.3279 | 386 | 0.5728 | 0.5037 | 0.5728 | 0.7568 |
| No log | 6.3607 | 388 | 0.5811 | 0.5037 | 0.5811 | 0.7623 |
| No log | 6.3934 | 390 | 0.6219 | 0.4864 | 0.6219 | 0.7886 |
| No log | 6.4262 | 392 | 0.7206 | 0.4860 | 0.7206 | 0.8489 |
| No log | 6.4590 | 394 | 0.7026 | 0.4521 | 0.7026 | 0.8382 |
| No log | 6.4918 | 396 | 0.5809 | 0.5647 | 0.5809 | 0.7621 |
| No log | 6.5246 | 398 | 0.6073 | 0.4783 | 0.6073 | 0.7793 |
| No log | 6.5574 | 400 | 0.6538 | 0.5013 | 0.6538 | 0.8086 |
| No log | 6.5902 | 402 | 0.5950 | 0.4997 | 0.5950 | 0.7714 |
| No log | 6.6230 | 404 | 0.5649 | 0.6078 | 0.5649 | 0.7516 |
| No log | 6.6557 | 406 | 0.5763 | 0.5947 | 0.5763 | 0.7592 |
| No log | 6.6885 | 408 | 0.5599 | 0.5476 | 0.5599 | 0.7483 |
| No log | 6.7213 | 410 | 0.5756 | 0.4918 | 0.5756 | 0.7587 |
| No log | 6.7541 | 412 | 0.6884 | 0.4580 | 0.6884 | 0.8297 |
| No log | 6.7869 | 414 | 0.8219 | 0.4250 | 0.8219 | 0.9066 |
| No log | 6.8197 | 416 | 0.8650 | 0.3652 | 0.8650 | 0.9301 |
| No log | 6.8525 | 418 | 0.7458 | 0.4987 | 0.7458 | 0.8636 |
| No log | 6.8852 | 420 | 0.6326 | 0.5679 | 0.6326 | 0.7954 |
| No log | 6.9180 | 422 | 0.6981 | 0.5186 | 0.6981 | 0.8355 |
| No log | 6.9508 | 424 | 0.7431 | 0.4698 | 0.7431 | 0.8620 |
| No log | 6.9836 | 426 | 0.6644 | 0.5555 | 0.6644 | 0.8151 |
| No log | 7.0164 | 428 | 0.5999 | 0.5334 | 0.5999 | 0.7745 |
| No log | 7.0492 | 430 | 0.6295 | 0.5392 | 0.6295 | 0.7934 |
| No log | 7.0820 | 432 | 0.7937 | 0.4087 | 0.7937 | 0.8909 |
| No log | 7.1148 | 434 | 0.9018 | 0.3066 | 0.9018 | 0.9496 |
| No log | 7.1475 | 436 | 0.9169 | 0.3233 | 0.9169 | 0.9575 |
| No log | 7.1803 | 438 | 0.8080 | 0.3890 | 0.8080 | 0.8989 |
| No log | 7.2131 | 440 | 0.7276 | 0.5185 | 0.7276 | 0.8530 |
| No log | 7.2459 | 442 | 0.6817 | 0.5247 | 0.6817 | 0.8256 |
| No log | 7.2787 | 444 | 0.6517 | 0.5274 | 0.6517 | 0.8073 |
| No log | 7.3115 | 446 | 0.6322 | 0.5274 | 0.6322 | 0.7951 |
| No log | 7.3443 | 448 | 0.6209 | 0.5243 | 0.6209 | 0.7880 |
| No log | 7.3770 | 450 | 0.6725 | 0.4971 | 0.6725 | 0.8201 |
| No log | 7.4098 | 452 | 0.7799 | 0.4186 | 0.7799 | 0.8831 |
| No log | 7.4426 | 454 | 0.8270 | 0.4159 | 0.8270 | 0.9094 |
| No log | 7.4754 | 456 | 0.8086 | 0.4098 | 0.8086 | 0.8992 |
| No log | 7.5082 | 458 | 0.7887 | 0.4098 | 0.7887 | 0.8881 |
| No log | 7.5410 | 460 | 0.7181 | 0.4598 | 0.7181 | 0.8474 |
| No log | 7.5738 | 462 | 0.6632 | 0.4747 | 0.6632 | 0.8144 |
| No log | 7.6066 | 464 | 0.6260 | 0.4711 | 0.6260 | 0.7912 |
| No log | 7.6393 | 466 | 0.6077 | 0.4727 | 0.6077 | 0.7796 |
| No log | 7.6721 | 468 | 0.5954 | 0.4692 | 0.5954 | 0.7716 |
| No log | 7.7049 | 470 | 0.6316 | 0.4882 | 0.6316 | 0.7947 |
| No log | 7.7377 | 472 | 0.6822 | 0.4344 | 0.6822 | 0.8259 |
| No log | 7.7705 | 474 | 0.7153 | 0.4427 | 0.7153 | 0.8458 |
| No log | 7.8033 | 476 | 0.6673 | 0.4535 | 0.6673 | 0.8169 |
| No log | 7.8361 | 478 | 0.5830 | 0.5941 | 0.5830 | 0.7636 |
| No log | 7.8689 | 480 | 0.5590 | 0.6553 | 0.5590 | 0.7476 |
| No log | 7.9016 | 482 | 0.5719 | 0.6256 | 0.5719 | 0.7562 |
| No log | 7.9344 | 484 | 0.5958 | 0.5524 | 0.5958 | 0.7719 |
| No log | 7.9672 | 486 | 0.6364 | 0.5251 | 0.6364 | 0.7977 |
| No log | 8.0 | 488 | 0.6562 | 0.5061 | 0.6562 | 0.8101 |
| No log | 8.0328 | 490 | 0.6597 | 0.5061 | 0.6597 | 0.8122 |
| No log | 8.0656 | 492 | 0.5925 | 0.5061 | 0.5925 | 0.7698 |
| No log | 8.0984 | 494 | 0.5488 | 0.5877 | 0.5488 | 0.7408 |
| No log | 8.1311 | 496 | 0.5478 | 0.4866 | 0.5478 | 0.7401 |
| No log | 8.1639 | 498 | 0.5610 | 0.4418 | 0.5610 | 0.7490 |
| 0.3558 | 8.1967 | 500 | 0.5716 | 0.5463 | 0.5716 | 0.7560 |
| 0.3558 | 8.2295 | 502 | 0.5989 | 0.4935 | 0.5989 | 0.7739 |
| 0.3558 | 8.2623 | 504 | 0.6117 | 0.5090 | 0.6117 | 0.7821 |
| 0.3558 | 8.2951 | 506 | 0.5893 | 0.5087 | 0.5893 | 0.7676 |
| 0.3558 | 8.3279 | 508 | 0.5853 | 0.4973 | 0.5853 | 0.7650 |
| 0.3558 | 8.3607 | 510 | 0.5891 | 0.4856 | 0.5891 | 0.7675 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|