Search is not available for this dataset
modelId
stringlengths 5
138
| author
stringlengths 2
42
| last_modified
unknowndate 2020-02-15 11:33:14
2025-04-11 12:28:23
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 420
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 54
values | createdAt
unknowndate 2022-03-02 23:29:04
2025-04-11 12:28:05
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
darrenbudiman/TennisCommentaryBitsNBytes | darrenbudiman | "2025-03-26T12:32:13Z" | 33 | 0 | null | [
"safetensors",
"florence2",
"custom_code",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-15T14:24:42Z" | ---
license: apache-2.0
---
|
RichardErkhov/vilm_-_Quyen-Plus-v0.1-8bits | RichardErkhov | "2025-02-12T00:04:52Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-11T23:59:11Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Quyen-Plus-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/vilm/
- Original model: https://huggingface.co/vilm/Quyen-Plus-v0.1/
Original model description:
---
language:
- en
license: other
library_name: transformers
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
pipeline_tag: text-generation
model-index:
- name: Quyen-Plus-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 55.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Plus-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 78.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Plus-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Plus-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.6
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Plus-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Plus-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Plus-v0.1
name: Open LLM Leaderboard
---
# Quyen
<img src="quyen.webp" width="512" height="512" alt="Quyen">
# Model Description
Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced 6 different versions:
- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**
All models were trained with SFT and DPO using the following dataset:
- *OpenHermes-2.5* by **Teknium**
- *Capyabara* by **LDJ**
- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
- *orca_dpo_pairs* by **Intel**
- and Private Data by **Ontocord** & **BEE-spoke-data**
# Prompt Template
- All Quyen models use ChatML as the default template:
```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```
- You can also use `apply_chat_template`:
```python
messages = [
{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
{"role": "user", "content": "Hello world."}
]
gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```
# Benchmarks:
- Coming Soon! We will update the benchmarks later
# Acknowledgement
- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- Special thanks to the Qwen team for letting us access the models early for these amazing finetunes.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vilm__Quyen-Plus-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.27|
|AI2 Reasoning Challenge (25-Shot)|55.72|
|HellaSwag (10-Shot) |78.52|
|MMLU (5-Shot) |60.45|
|TruthfulQA (0-shot) |53.60|
|Winogrande (5-shot) |71.27|
|GSM8k (5-shot) |60.05|
|
EleutherAI/pythia-2.8b-nli-first-ft | EleutherAI | "2024-03-22T18:25:42Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-16T01:44:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF | mradermacher | "2025-02-26T07:00:21Z" | 643 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DoppelReflEx/MN-12B-Mimicore-Orochi",
"base_model:quantized:DoppelReflEx/MN-12B-Mimicore-Orochi",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-29T15:46:00Z" | ---
base_model: DoppelReflEx/MN-12B-Mimicore-Orochi
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-Orochi
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Mimicore-Orochi-i1-GGUF/resolve/main/MN-12B-Mimicore-Orochi.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
blerimceka/MyFirstAI | blerimceka | "2024-01-12T21:10:11Z" | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | "2024-01-12T21:00:01Z" | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
flowerpot76/Pixelcopter-PLE-v0 | flowerpot76 | "2023-04-24T06:51:50Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-24T06:51:45Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 57.80 +/- 42.36
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
PrunaAI/Qwen-Qwen2-Math-7B-HQQ-2bit-smashed | PrunaAI | "2024-08-16T20:34:32Z" | 9 | 0 | null | [
"qwen2",
"pruna-ai",
"base_model:Qwen/Qwen2-Math-7B",
"base_model:finetune:Qwen/Qwen2-Math-7B",
"region:us"
] | null | "2024-08-16T20:32:41Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Qwen/Qwen2-Math-7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo Qwen/Qwen2-Math-7B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/Qwen-Qwen2-Math-7B-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/Qwen-Qwen2-Math-7B-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-Math-7B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Qwen/Qwen2-Math-7B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
YakovElm/Apache5Classic_256 | YakovElm | "2023-05-24T23:20:16Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-24T23:19:35Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache5Classic_256
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache5Classic_256
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2678
- Train Accuracy: 0.9131
- Validation Loss: 0.5122
- Validation Accuracy: 0.8194
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3098 | 0.9031 | 0.5071 | 0.8233 | 0 |
| 0.2939 | 0.9105 | 0.4952 | 0.8233 | 1 |
| 0.2678 | 0.9131 | 0.5122 | 0.8194 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
midnightGlow/flant5-xlsum-bangla-source-domain | midnightGlow | "2024-11-29T20:09:33Z" | 9 | 0 | null | [
"safetensors",
"t5",
"summarization",
"bn",
"base_model:midnightGlow/flant5_xlsum_bangla",
"base_model:finetune:midnightGlow/flant5_xlsum_bangla",
"region:us"
] | summarization | "2024-11-29T19:09:37Z" | ---
language:
- bn
metrics:
- bleu
- rouge
base_model:
- midnightGlow/flant5_xlsum_bangla
pipeline_tag: summarization
--- |
Litzy619/G0515HMA14H | Litzy619 | "2024-05-16T02:23:46Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:finetune:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-05-15T19:14:54Z" | ---
license: gemma
base_model: google/gemma-2b
tags:
- generated_from_trainer
model-index:
- name: G0515HMA14H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# G0515HMA14H
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2098 | 0.09 | 10 | 2.8516 |
| 2.5967 | 0.18 | 20 | 2.1281 |
| 1.6718 | 0.27 | 30 | 1.1013 |
| 0.6829 | 0.36 | 40 | 0.2924 |
| 0.2107 | 0.45 | 50 | 0.1645 |
| 0.1575 | 0.54 | 60 | 0.1545 |
| 0.1513 | 0.63 | 70 | 0.1509 |
| 0.1525 | 0.73 | 80 | 0.1491 |
| 0.145 | 0.82 | 90 | 0.1503 |
| 0.1484 | 0.91 | 100 | 0.1484 |
| 0.15 | 1.0 | 110 | 0.1488 |
| 0.1437 | 1.09 | 120 | 0.1483 |
| 0.145 | 1.18 | 130 | 0.1479 |
| 0.1459 | 1.27 | 140 | 0.1469 |
| 0.1487 | 1.36 | 150 | 0.1454 |
| 0.1418 | 1.45 | 160 | 0.1483 |
| 0.1435 | 1.54 | 170 | 0.1437 |
| 0.143 | 1.63 | 180 | 0.1416 |
| 0.1423 | 1.72 | 190 | 0.1391 |
| 0.1375 | 1.81 | 200 | 0.1352 |
| 0.138 | 1.9 | 210 | 0.1349 |
| 0.1345 | 1.99 | 220 | 0.1269 |
| 0.1297 | 2.08 | 230 | 0.1297 |
| 0.1267 | 2.18 | 240 | 0.1294 |
| 0.1264 | 2.27 | 250 | 0.1276 |
| 0.1255 | 2.36 | 260 | 0.1256 |
| 0.1249 | 2.45 | 270 | 0.1238 |
| 0.1191 | 2.54 | 280 | 0.1222 |
| 0.1169 | 2.63 | 290 | 0.1207 |
| 0.1163 | 2.72 | 300 | 0.1199 |
| 0.1199 | 2.81 | 310 | 0.1190 |
| 0.1226 | 2.9 | 320 | 0.1190 |
| 0.1203 | 2.99 | 330 | 0.1189 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
liquidrichard/p2 | liquidrichard | "2025-02-13T16:37:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-13T16:33:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CyberHarem/rapunzel_nikke | CyberHarem | "2023-08-06T03:21:45Z" | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/rapunzel_nikke",
"license:mit",
"region:us"
] | text-to-image | "2023-08-06T03:18:06Z" | ---
license: mit
datasets:
- CyberHarem/rapunzel_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of rapunzel_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/rapunzel_nikke.pt` as the embedding and `1500/rapunzel_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `rapunzel_nikke`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/rapunzel_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/rapunzel_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/rapunzel_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/rapunzel_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/rapunzel_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/rapunzel_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/rapunzel_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/rapunzel_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/rapunzel_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/rapunzel_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/rapunzel_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/rapunzel_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/rapunzel_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/rapunzel_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/rapunzel_nikke.zip) |
|
thesisabc/segformer-b0-finetuned-segments-sidewalk-2 | thesisabc | "2023-06-22T04:01:52Z" | 160 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"dataset:segments/sidewalk-semantic",
"arxiv:2105.15203",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2023-06-21T05:10:21Z" | ---
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
---
# SegFormer (b0-sized) model fine-tuned on Segments.ai sidewalk-semantic.
SegFormer model fine-tuned on [Segments.ai](https://segments.ai) [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic). It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
### How to use
Here is how to use this model to classify an image of the sidewalk dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("segments-tobias/segformer-b0-finetuned-segments-sidewalk")
url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
SergioPerezBkanco/mi-super-modelo | SergioPerezBkanco | "2024-01-11T10:52:50Z" | 175 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-04T18:11:15Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: mi-super-modelo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi-super-modelo
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
NikolayKozloff/ARWKV-R1-7B-Q8_0-GGUF | NikolayKozloff | "2025-03-23T19:44:51Z" | 0 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"zh",
"base_model:RWKV-Red-Team/ARWKV-R1-7B",
"base_model:quantized:RWKV-Red-Team/ARWKV-R1-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-03-23T19:44:13Z" | ---
base_model: RWKV-Red-Team/ARWKV-R1-7B
language:
- en
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/ARWKV-R1-7B-Q8_0-GGUF
This model was converted to GGUF format from [`RWKV-Red-Team/ARWKV-R1-7B`](https://huggingface.co/RWKV-Red-Team/ARWKV-R1-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/RWKV-Red-Team/ARWKV-R1-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/ARWKV-R1-7B-Q8_0-GGUF --hf-file arwkv-r1-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/ARWKV-R1-7B-Q8_0-GGUF --hf-file arwkv-r1-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/ARWKV-R1-7B-Q8_0-GGUF --hf-file arwkv-r1-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/ARWKV-R1-7B-Q8_0-GGUF --hf-file arwkv-r1-7b-q8_0.gguf -c 2048
```
|
growpark/llama381binstruct_summarize_short | growpark | "2025-03-28T06:09:04Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:NousResearch/Meta-Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-28T06:07:32Z" | ---
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: transformers
model_name: llama381binstruct_summarize_short
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama381binstruct_summarize_short
This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="growpark/llama381binstruct_summarize_short", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/timeintelligences-cognify/huggingface/runs/ryfsok06)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
beyoru/Calling | beyoru | "2025-04-06T09:40:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T09:34:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/CAI-Supernova-r2-GGUF | mradermacher | "2025-03-19T15:08:55Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:Edens-Gate/CAI-Supernova-r2",
"base_model:quantized:Edens-Gate/CAI-Supernova-r2",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-19T14:44:02Z" | ---
base_model: Edens-Gate/CAI-Supernova-r2
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Edens-Gate/CAI-Supernova-r2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CAI-Supernova-r2-GGUF/resolve/main/CAI-Supernova-r2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
WachirawitV/Custom-Test-T-resnet18-model | WachirawitV | "2024-06-05T03:16:09Z" | 79 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | "2024-06-05T03:03:58Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
opendiffusion/sentimentcheck | opendiffusion | "2023-07-10T16:58:49Z" | 0 | 0 | tf-keras | [
"tf-keras",
"bert",
"region:us"
] | null | "2023-05-11T18:26:04Z" |
# Intro
OpenDiffusion's SentimentCheck is an AI model built upon Tensorflow+Keras+Pickles. SentimentCheck harnesses the power of deep learning algorithms to accurately classify sentiment in text, making it a flexible tool for businesses, researchers, and developers.
## Usage
---
language:
- en
- nl
- de
- fr
- it
- es
license: mit
---
# bert-base-multilingual-uncased-sentiment
This a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks.
## Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of reviews |
| -------- | ----------------- |
| English | 150k |
| Dutch | 80k |
| German | 137k |
| French | 140k |
| Italian | 72k |
| Spanish | 50k |
## Accuracy
The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------------- | ------------------- |
| English | 67% | 95%
| Dutch | 57% | 93%
| German | 61% | 94%
| French | 59% | 94%
| Italian | 59% | 95%
| Spanish | 58% | 95%
|
infogeo/dfc1b2e9-8e02-48b3-a847-134fa5ee1019 | infogeo | "2025-01-23T00:51:49Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-64k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-64k",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T22:22:28Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-64k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dfc1b2e9-8e02-48b3-a847-134fa5ee1019
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-64k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c4f5aa689cea73ab_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c4f5aa689cea73ab_train_data.json
type:
field_instruction: article
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: infogeo/dfc1b2e9-8e02-48b3-a847-134fa5ee1019
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 79GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/c4f5aa689cea73ab_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 15d339b7-6cfa-46d5-bc15-19d8e91fb296
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 15d339b7-6cfa-46d5-bc15-19d8e91fb296
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# dfc1b2e9-8e02-48b3-a847-134fa5ee1019
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 1.9041 |
| 7.189 | 0.0014 | 5 | 1.8823 |
| 7.5602 | 0.0028 | 10 | 1.8621 |
| 7.7457 | 0.0042 | 15 | 1.8519 |
| 7.9226 | 0.0056 | 20 | 1.8473 |
| 7.0549 | 0.0070 | 25 | 1.8448 |
| 6.9774 | 0.0084 | 30 | 1.8439 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bibuai/pro_pijamas_dodgers2 | bibuai | "2025-03-10T18:46:24Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-10T18:36:12Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: pro_pijamas_dodgers2
---
# Pro_Pijamas_Dodgers2
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `pro_pijamas_dodgers2` to trigger the image generation.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('bibuai/pro_pijamas_dodgers2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ihughes15234/phi35_pd_dpo10epoch_1200 | ihughes15234 | "2024-11-13T22:16:28Z" | 84 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:ihughes15234/phi_3_5_mini_3k_each",
"base_model:finetune:ihughes15234/phi_3_5_mini_3k_each",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-13T22:00:44Z" | ---
base_model: ihughes15234/phi_3_5_mini_3k_each
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ihughes15234
- **License:** apache-2.0
- **Finetuned from model :** ihughes15234/phi_3_5_mini_3k_each
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gaudi/opus-mt-wa-en-ctranslate2 | gaudi | "2024-10-18T22:55:37Z" | 10 | 0 | transformers | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | "2024-07-17T00:17:48Z" | ---
tags:
- ctranslate2
- translation
license: apache-2.0
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-wa-en)
- This respository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-wa-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared the vanilla checkpoints using the *transformers* library. A **slight reduction on BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-wa-en --output_dir ./ctranslate2/opus-mt-wa-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-wa-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-wa-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-wa-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be idential to [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-wa-en) by Helsinki-NLP.
|
AliE02/NaturalLanguagePioneersDPO | AliE02 | "2024-05-30T10:38:29Z" | 151 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"education",
"conversational",
"custom_code",
"en",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-30T07:40:01Z" | ---
license: mit
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
language:
- en
tags:
- education
--- |
oldiday/03fd8cf0-e0cb-4b66-85a3-839eea3291ec | oldiday | "2025-02-09T16:10:33Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | "2025-02-09T12:20:02Z" | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 03fd8cf0-e0cb-4b66-85a3-839eea3291ec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d09ba17adfe7b60f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d09ba17adfe7b60f_train_data.json
type:
field_input: section
field_instruction: link
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: oldiday/03fd8cf0-e0cb-4b66-85a3-839eea3291ec
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 600
micro_batch_size: 8
mlflow_experiment_name: /tmp/d09ba17adfe7b60f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1.0e-05
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: c1312b34-452b-4eb3-a006-bfeb96a41bfe
wandb_project: Gradients-On-Six
wandb_run: your_name
wandb_runid: c1312b34-452b-4eb3-a006-bfeb96a41bfe
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 03fd8cf0-e0cb-4b66-85a3-839eea3291ec
This model is a fine-tuned version of [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.4756 |
| 2.3264 | 0.0062 | 50 | 2.3711 |
| 2.3937 | 0.0125 | 100 | 2.3475 |
| 2.3765 | 0.0187 | 150 | 2.3262 |
| 2.2978 | 0.0250 | 200 | 2.3141 |
| 2.3613 | 0.0312 | 250 | 2.3022 |
| 2.2792 | 0.0374 | 300 | 2.2954 |
| 2.3233 | 0.0437 | 350 | 2.2893 |
| 2.2736 | 0.0499 | 400 | 2.2839 |
| 2.2607 | 0.0562 | 450 | 2.2802 |
| 2.2556 | 0.0624 | 500 | 2.2781 |
| 2.2829 | 0.0686 | 550 | 2.2775 |
| 2.2796 | 0.0749 | 600 | 2.2775 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
earnxus/36fe7d88-2144-48dd-9b8d-dff42bd27d7d | earnxus | "2025-02-09T02:59:27Z" | 35 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-09T01:44:15Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 36fe7d88-2144-48dd-9b8d-dff42bd27d7d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 17bb0f1ae3fcc5f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/17bb0f1ae3fcc5f6_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/36fe7d88-2144-48dd-9b8d-dff42bd27d7d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 600
micro_batch_size: 2
mlflow_experiment_name: /tmp/17bb0f1ae3fcc5f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 5ac99e5c-60a4-4d17-bdfb-2898b6bcbca1
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: 5ac99e5c-60a4-4d17-bdfb-2898b6bcbca1
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 36fe7d88-2144-48dd-9b8d-dff42bd27d7d
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5773 | 0.0699 | 600 | 1.3664 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/zlsl_-_ru_warcraft-4bits | RichardErkhov | "2025-03-14T18:25:16Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-14T18:25:05Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ru_warcraft - bnb 4bits
- Model creator: https://huggingface.co/zlsl/
- Original model: https://huggingface.co/zlsl/ru_warcraft/
Original model description:
---
license: gpl-3.0
language:
- ru
library_name: transformers
tags:
- warcraft
- world of warcraft
pipeline_tag: text-generation
---
ΠΠΎΠ΄Π΅Π»Ρ ΠΎΠ±ΡΡΠ΅Π½Π½Π°Ρ Π½Π° ΠΊΠ½ΠΈΠ³Π°Ρ
ΠΏΠΎ Warcraft
## ΠΠ»Ρ ΠΏΠΎΠ»ΡΠ·ΠΎΠ²Π°ΡΠ΅Π»Π΅ΠΉ text-generation-webui
Π ΠΈΠ½ΡΡΡΡΠΌΠ΅Π½ΡΠ΅ ΠΏΠΎΠ»ΠΎΠΌΠ°Π½Π° ΡΠ°Π±ΠΎΡΠ° Ρ GPT-2, GPTJ, GPT-NEO ΠΈ Π°Π½Π°Π»ΠΎΠ³ΠΈΡΠ½ΡΠΌΠΈ ΠΌΠΎΠ΄Π»ΡΠΌΠΈ, Π½Π΅Π²Π΅ΡΠ½ΠΎ Π·Π°Π³ΡΡΠΆΠ°Π΅ΡΡΡ ΡΠΎΠΊΠ΅Π½ΠΈΠ·Π΅Ρ.
ΠΡΠΈΠ±ΠΊΠ° ΡΠ°ΠΊΠ°Ρ:<br>
>eos_token_id = eos_token_id[0]
>IndexError: list index out of range
ΠΡΠΏΡΠ°Π²Π»ΡΠ΅ΡΡΡ Π»Π΅Π³ΠΊΠΎ, Π² ΡΠ°ΠΉΠ» modules/models.py Π² ΡΡΠ½ΠΊΡΠΈΡ load_tokenizer() Π½Π°Π΄ΠΎ Π΄ΠΎΠ±Π°Π²ΠΈΡΡ ΡΡΡΠΎΡΠΊΡ<br>
<code>tokenizer.eos_token_id = 2</code><br>
ΠΏΠ΅ΡΠ΅Π΄<br>
<code>return tokenizer</code>
|
souvenger/bart-cnn-samsum-finetuned | souvenger | "2023-12-26T17:51:31Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-12-26T17:50:43Z" | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1207 | 1.0 | 74 | 0.1343 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
alikanakar/whisper-synthesized-turkish-8-hour-llr | alikanakar | "2023-04-28T14:19:27Z" | 12 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-04-28T02:05:35Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-synthesized-turkish-8-hour-llr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-synthesized-turkish-8-hour-llr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2166
- eval_wer: 13.5662
- eval_runtime: 518.2334
- eval_samples_per_second: 1.482
- eval_steps_per_second: 0.185
- epoch: 18.75
- step: 3600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
potradovec/bart-cnn-samsum-finetuned | potradovec | "2024-04-11T10:59:09Z" | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-11T03:16:10Z" | ---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 0.3206 |
| 1.4764 | 2.0 | 12 | 0.2261 |
| 1.4764 | 3.0 | 18 | 0.2178 |
| 0.2188 | 4.0 | 24 | 0.2113 |
| 0.2077 | 5.0 | 30 | 0.2105 |
| 0.2077 | 6.0 | 36 | 0.2146 |
| 0.1529 | 7.0 | 42 | 0.2058 |
| 0.1529 | 8.0 | 48 | 0.2216 |
| 0.1482 | 9.0 | 54 | 0.2060 |
| 0.1433 | 10.0 | 60 | 0.2049 |
| 0.1433 | 11.0 | 66 | 0.2040 |
| 0.1153 | 12.0 | 72 | 0.2117 |
| 0.1153 | 13.0 | 78 | 0.2181 |
| 0.1022 | 14.0 | 84 | 0.2220 |
| 0.1145 | 15.0 | 90 | 0.2177 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Iran-IOAIteam/AMLM-token-trained | Iran-IOAIteam | "2024-08-04T13:18:56Z" | 179 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-08-03T18:21:25Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
blockblockblock/smol_llama-220M-GQA-bpw4.2 | blockblockblock | "2024-03-12T17:18:09Z" | 1 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"smol_llama",
"llama2",
"en",
"dataset:JeanKaddour/minipile",
"dataset:pszemraj/simple_wikipedia_LM",
"dataset:mattymchen/refinedweb-3m",
"dataset:BEE-spoke-data/knowledge-inoc-concat-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-12T06:11:37Z" | ---
language:
- en
license: apache-2.0
tags:
- smol_llama
- llama2
datasets:
- JeanKaddour/minipile
- pszemraj/simple_wikipedia_LM
- mattymchen/refinedweb-3m
- BEE-spoke-data/knowledge-inoc-concat-v1
inference:
parameters:
max_new_tokens: 64
do_sample: true
temperature: 0.8
repetition_penalty: 1.05
no_repeat_ngram_size: 4
eta_cutoff: 0.0006
renormalize_logits: true
widget:
- text: My name is El Microondas the Wise, and
example_title: El Microondas
- text: Kennesaw State University is a public
example_title: Kennesaw State University
- text: Bungie Studios is an American video game developer. They are most famous for
developing the award winning Halo series of video games. They also made Destiny.
The studio was founded
example_title: Bungie
- text: The Mona Lisa is a world-renowned painting created by
example_title: Mona Lisa
- text: The Harry Potter series, written by J.K. Rowling, begins with the book titled
example_title: Harry Potter Series
- text: 'Question: I have cities, but no houses. I have mountains, but no trees. I
have water, but no fish. What am I?
Answer:'
example_title: Riddle
- text: The process of photosynthesis involves the conversion of
example_title: Photosynthesis
- text: Jane went to the store to buy some groceries. She picked up apples, oranges,
and a loaf of bread. When she got home, she realized she forgot
example_title: Story Continuation
- text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph,
and another train leaves Station B at 10:00 AM and travels at 80 mph, when will
they meet if the distance between the stations is 300 miles?
To determine'
example_title: Math Problem
- text: In the context of computer programming, an algorithm is
example_title: Algorithm Definition
pipeline_tag: text-generation
model-index:
- name: smol_llama-220M-GQA
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 24.83
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 29.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.55
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/smol_llama-220M-GQA
name: Open LLM Leaderboard
---
# smol_llama: 220M GQA
> model card WIP, more details to come
A small 220M param (total) decoder model. This is the first version of the model.
- 1024 hidden size, 10 layers
- GQA (32 heads, 8 key-value), context length 2048
- train-from-scratch on one GPU :)
## Links
[Here](https://huggingface.co/collections/BEE-spoke-data/finetuned-smol-220m-65998b080ae723e79c830f83) are some fine-tunes we did, but there are many more possibilities out there!
- instruct
- openhermes - [link](https://huggingface.co/BEE-spoke-data/smol_llama-220M-openhermes)
- open-instruct - [link](https://huggingface.co/BEE-spoke-data/smol_llama-220M-open_instruct)
- code
- python (pypi) - [link](https://huggingface.co/BEE-spoke-data/beecoder-220M-python)
- zephyr DPO tune
- SFT - [link](https://huggingface.co/BEE-spoke-data/zephyr-220m-sft-full)
- full DPO - [link](https://huggingface.co/BEE-spoke-data/zephyr-220m-dpo-full)
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-220M-GQA)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.44|
|AI2 Reasoning Challenge (25-Shot)|24.83|
|HellaSwag (10-Shot) |29.76|
|MMLU (5-Shot) |25.85|
|TruthfulQA (0-shot) |44.55|
|Winogrande (5-shot) |50.99|
|GSM8k (5-shot) | 0.68|
|
diegofiggie/fairness_model | diegofiggie | "2024-02-27T22:17:33Z" | 6 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | text-classification | "2024-02-27T22:15:35Z" | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'first: We recommend self-help books on conflict resolution, available in
our office library, as supplemental resources. second: Our company conducts regular
surveys to identify and address recurring disputes.'
- text: 'first: Conflict Resolution Apps: We offer technology solutions for reporting
and tracking conflicts. second: Employees can request a mediator to assist in
resolving issues with their supervisor, ensuring fair dispute resolution.'
- text: 'first: Our organization encourages employees to participate in leadership
development programs, enhancing their ability to interact with supervisors. second:
Conflict Simulation Exercises: Role-playing helps employees practice resolving
conflicts.'
- text: 'first: Mediation sessions are scheduled outside of regular working hours
for convenience. second: Employee Conflict Coaches: Coaches work one-on-one with
employees to resolve disputes.'
- text: 'first: We provide conflict resolution pamphlets in the breakroom, offering
helpful tips. second: We provide resources for employees to seek external mediation
or counseling services if disputes with supervisors persist.'
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/all-MiniLM-L6-v2
model-index:
- name: SetFit with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.4090909090909091
name: Accuracy
---
# SetFit with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'first: Employee Support Groups: Peer-led support groups for employees facing similar issues. second: We offer conflict resolution workshops to provide employees with valuable skills.'</li></ul> |
| 1 | <ul><li>'first: Conflict Resolution Peer Mentoring: Experienced employees mentor newcomers in conflict resolution. second: Diversity and Inclusion Training: Programs that promote understanding and reduce conflicts related to diversity.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.4091 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the π€ Hub
model = SetFitModel.from_pretrained("diegofiggie/fairness_model")
# Run inference
preds = model("first: Mediation sessions are scheduled outside of regular working hours for convenience. second: Employee Conflict Coaches: Coaches work one-on-one with employees to resolve disputes.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 24 | 25.5 | 27 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 1 |
| 1 | 1 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 30
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0333 | 1 | 0.2322 | - |
### Framework Versions
- Python: 3.10.9
- SetFit: 1.0.3
- Sentence Transformers: 2.4.0
- Transformers: 4.38.1
- PyTorch: 2.2.1+cpu
- Datasets: 2.17.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
flax-sentence-embeddings/all_datasets_v3_MiniLM-L12 | flax-sentence-embeddings | "2021-07-23T15:37:42Z" | 435 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"arxiv:2104.08727",
"arxiv:1810.09305",
"arxiv:2102.07033",
"arxiv:1904.06472",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
---
# Model description
The project aims to train sentence embedding models on very large sentence level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`MiniLM-L12`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned in on a
1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developped this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face. We developped this model as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well
as intervention from Googleβs Flax, JAX, and Cloud team member about efficient deep learning frameworks.
## Intended uses
Our model is intented to be used as a sentence encoder. Given an input sentence, it ouptuts a vector which captures
the sentence semantic information. The sentence vector may be used for information retrieval, clustering or sentence
similarity tasks.
## How to use
Here is how to use this model to get the features of a given text using [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12')
text = "Replace me by any text you'd like."
text_embbedding = model.encode(text)
# array([-0.01559514, 0.04046123, 0.1317083 , 0.00085931, 0.04585106,
# -0.05607086, 0.0138078 , 0.03569756, 0.01420381, 0.04266302 ...],
# dtype=float32)
```
# Training procedure
## Pre-training
We use the pretrained [`MiniLM-L12`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased). Please refer to the model
card for more detailed information about the pre-training procedure.
## Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
We then apply the cross entropy loss by comparing with true pairs.
### Hyper parameters
We trained ou model on a TPU v3-8. We train the model during 540k steps using a batch size of 1024 (128 per TPU core).
We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository.
### Training data
We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences.
We sampled each dataset given a weighted probability which configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_title_body_jsonl) | - | 364,001 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [COCO 2020](COCO 2020) | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [TriviaqQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Quora Question Pairs](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Eli5](https://huggingface.co/datasets/eli5)Β | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [SPECTER](https://github.com/allenai/specter) | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [S2ORC](https://github.com/allenai/s2orc) Title/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [S2ORC](https://github.com/allenai/s2orc) Citation/Citation | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) Citation/Abstract | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [PAQ](https://github.com/facebookresearch/PAQ) | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| SearchQA | - | 582,261 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Title/Question | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) Question/Answer | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [MS MARCO](https://microsoft.github.io/msmarco/) | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [Reddit conversationnal](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| total | | 1,097,953,922 |
|
CompassioninMachineLearning/formatted20k_plusalpaca | CompassioninMachineLearning | "2024-11-28T06:28:24Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-28T06:24:16Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sparrow8i8
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lesso09/b3735a79-6acb-44ef-a552-4339545a822c | lesso09 | "2025-02-22T18:19:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:adapter:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"region:us"
] | null | "2025-02-22T15:27:41Z" | ---
library_name: peft
license: apache-2.0
base_model: Intel/neural-chat-7b-v3-3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b3735a79-6acb-44ef-a552-4339545a822c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: Intel/neural-chat-7b-v3-3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 33e2c25dba00b201_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/33e2c25dba00b201_train_data.json
type:
field_input: src_lang
field_instruction: src_sent
field_output: tgt_sent
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso09/b3735a79-6acb-44ef-a552-4339545a822c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000209
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/33e2c25dba00b201_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 90
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 09c04ab8-0f3d-4b26-b0d2-0162e695446c
wandb_project: 09a
wandb_run: your_name
wandb_runid: 09c04ab8-0f3d-4b26-b0d2-0162e695446c
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b3735a79-6acb-44ef-a552-4339545a822c
This model is a fine-tuned version of [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000209
- train_batch_size: 4
- eval_batch_size: 4
- seed: 90
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 4.4816 |
| 6.6279 | 0.0014 | 50 | 3.2339 |
| 6.1181 | 0.0028 | 100 | 2.7749 |
| 6.2009 | 0.0042 | 150 | 2.3707 |
| 6.3069 | 0.0056 | 200 | 2.3243 |
| 5.9682 | 0.0070 | 250 | 1.9809 |
| 5.9338 | 0.0084 | 300 | 1.7689 |
| 6.4019 | 0.0098 | 350 | 1.6779 |
| 5.4911 | 0.0112 | 400 | 1.5865 |
| 5.6447 | 0.0126 | 450 | 1.5542 |
| 5.3944 | 0.0140 | 500 | 1.5476 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
s-man2099/gpl-1000 | s-man2099 | "2023-10-28T13:39:06Z" | 13 | 0 | transformers | [
"transformers",
"tf",
"pegasus",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-10-26T12:40:02Z" | ---
base_model: google/pegasus-large
tags:
- generated_from_keras_callback
model-index:
- name: s-man2099/gpl-1000
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# s-man2099/gpl-1000
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2314
- Validation Loss: 3.5643
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 6e-06, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': True}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9851 | 3.6010 | 0 |
| 3.7143 | 3.5460 | 1 |
| 3.5973 | 3.5299 | 2 |
| 3.5158 | 3.5266 | 3 |
| 3.4439 | 3.5331 | 4 |
| 3.3673 | 3.5419 | 5 |
| 3.2942 | 3.5538 | 6 |
| 3.2253 | 3.5580 | 7 |
| 3.2226 | 3.5617 | 8 |
| 3.2314 | 3.5643 | 9 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Triangle104/Falcon3-7B-Instruct-abliterated-Q5_K_S-GGUF | Triangle104 | "2024-12-23T10:30:50Z" | 6 | 0 | transformers | [
"transformers",
"gguf",
"falcon3",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"es",
"pt",
"base_model:huihui-ai/Falcon3-7B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Falcon3-7B-Instruct-abliterated",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-23T10:30:27Z" | ---
language:
- en
- fr
- es
- pt
tags:
- falcon3
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
base_model: huihui-ai/Falcon3-7B-Instruct-abliterated
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
library_name: transformers
---
# Triangle104/Falcon3-7B-Instruct-abliterated-Q5_K_S-GGUF
This model was converted to GGUF format from [`huihui-ai/Falcon3-7B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Falcon3-7B-Instruct-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Falcon3-7B-Instruct-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Falcon3-7B-Instruct-abliterated-Q5_K_S-GGUF --hf-file falcon3-7b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Falcon3-7B-Instruct-abliterated-Q5_K_S-GGUF --hf-file falcon3-7b-instruct-abliterated-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Falcon3-7B-Instruct-abliterated-Q5_K_S-GGUF --hf-file falcon3-7b-instruct-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Falcon3-7B-Instruct-abliterated-Q5_K_S-GGUF --hf-file falcon3-7b-instruct-abliterated-q5_k_s.gguf -c 2048
```
|
dbmdz/electra-base-french-europeana-cased-generator | dbmdz | "2023-09-06T22:19:06Z" | 267 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"fill-mask",
"historic french",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
language: fr
license: mit
tags:
- "historic french"
---
# π€ + π dbmdz ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana ELECTRA models π
# French Europeana ELECTRA
We extracted all French texts using the `language` metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
ELECTRA model weights for PyTorch and TensorFlow are available.
* French Europeana ELECTRA (discriminator): `dbmdz/electra-base-french-europeana-cased-discriminator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-discriminator/tree/main)
* French Europeana ELECTRA (generator): `dbmdz/electra-base-french-europeana-cased-generator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-generator/tree/main)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) π€
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC β€οΈ
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download our models from their S3 storage π€
|
mateiaass/albert-base-qa-coQA-2-k-fold-3 | mateiaass | "2023-10-27T11:46:02Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"question-answering",
"generated_from_trainer",
"base_model:albert/albert-base-v2",
"base_model:finetune:albert/albert-base-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-10-25T07:49:20Z" | ---
license: apache-2.0
base_model: albert-base-v2
tags:
- generated_from_trainer
model-index:
- name: albert-base-qa-coQA-2-k-fold-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-qa-coQA-2-k-fold-3
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.6892 | 1.0 | 5468 | 2.6897 |
| 2.3597 | 2.0 | 10936 | 2.6560 |
| 2.0666 | 3.0 | 16404 | 2.7332 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
JOSALPHO/my_awesome_qa_model | JOSALPHO | "2023-10-15T13:09:37Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-10-09T16:38:40Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.2562 |
| 2.7046 | 2.0 | 500 | 1.7025 |
| 2.7046 | 3.0 | 750 | 1.6412 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|
utkarsh309/Model | utkarsh309 | "2024-07-03T17:52:10Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-03T17:50:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sujatha2502/DialogRPT-updown-finetuned-wnli | sujatha2502 | "2022-10-10T07:01:47Z" | 101 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-10-10T04:57:14Z" | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: DialogRPT-updown-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: wnli
split: train
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.4788732394366197
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DialogRPT-updown-finetuned-wnli
This model is a fine-tuned version of [microsoft/DialogRPT-updown](https://huggingface.co/microsoft/DialogRPT-updown) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7006
- Accuracy: 0.4789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.7006 | 0.4789 |
| No log | 2.0 | 80 | 0.7067 | 0.4507 |
| No log | 3.0 | 120 | 0.7070 | 0.4789 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
hdve/Qwen-Qwen1.5-1.8B-1717353030 | hdve | "2024-06-02T18:32:45Z" | 132 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-02T18:31:00Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prxy5604/7f1e7b03-a3ce-4776-ae7e-9da37dc24f7f | prxy5604 | "2025-01-14T05:54:52Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"region:us"
] | null | "2025-01-14T04:54:31Z" | ---
library_name: peft
license: other
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7f1e7b03-a3ce-4776-ae7e-9da37dc24f7f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- d8fc093a0cf4f749_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d8fc093a0cf4f749_train_data.json
type:
field_instruction: src
field_output: tgt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5604/7f1e7b03-a3ce-4776-ae7e-9da37dc24f7f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/d8fc093a0cf4f749_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cfde1278-7d97-4dce-9a14-d309b69c45ce
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cfde1278-7d97-4dce-9a14-d309b69c45ce
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7f1e7b03-a3ce-4776-ae7e-9da37dc24f7f
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7855 | 0.0004 | 1 | 2.3196 |
| 1.3374 | 0.0185 | 50 | 1.3084 |
| 1.1874 | 0.0370 | 100 | 1.2381 |
| 1.0555 | 0.0554 | 150 | 1.2154 |
| 1.219 | 0.0739 | 200 | 1.2097 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nat-hunt/0c229019-cc64-4ccf-ab09-ace554d694d1 | nat-hunt | "2025-01-28T06:57:46Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-28T06:47:57Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0c229019-cc64-4ccf-ab09-ace554d694d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e716b02cc9f9b733_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e716b02cc9f9b733_train_data.json
type:
field_input: sent1
field_instruction: startphrase
field_output: ending0
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/0c229019-cc64-4ccf-ab09-ace554d694d1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/e716b02cc9f9b733_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 64942535-ae63-4b26-9ba2-e925b575d5d5
wandb_project: Birthday-SN56-25-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 64942535-ae63-4b26-9ba2-e925b575d5d5
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0c229019-cc64-4ccf-ab09-ace554d694d1
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0012 | 13 | nan |
| 0.0 | 0.0023 | 26 | nan |
| 0.0 | 0.0035 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Xu-Ouyang/pythia-410m-deduped-int2-step64-GPTQ-wikitext2-uva | Xu-Ouyang | "2024-09-17T08:00:23Z" | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-09-17T07:59:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidschulte/ESM_clue_tnews | davidschulte | "2025-03-26T13:55:57Z" | 16 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:clue/clue",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-11-29T11:15:03Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- clue/clue
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM clue/clue
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** clue/clue
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** clue/clue
- **Subset [optional]:** tnews
- **Text Column:** sentence
- **Label Column:** label
- **Dataset Split:** train
- **Sample size [optional]:** 10000
- **Sample seed [optional]:** 42
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional trainiung details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, a efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have a enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enable you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are exptected to transfer to your target task.
### You want to find similar datasets to your target dataset
Using ESM-LogME can be used like search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefinit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses it to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide documentation further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
Moriacrafter/Qwen1.5-1.8B-8bit_DepressionDetection | Moriacrafter | "2024-07-01T06:10:23Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-01T06:08:49Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
geektech/flan-t5-large-lora-ce | geektech | "2024-01-06T08:54:28Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"region:us"
] | null | "2024-01-06T06:00:57Z" | ---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
hmbert/flair-hipe-2022-hipe2020-de | hmbert | "2023-10-17T23:22:54Z" | 8 | 0 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-13T12:31:52Z" | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verlieΓ , um in einer Central - Lage bey Sligo die Operationen der Armee
persΓΆnlich zu dirigiren . Der Feind dΓΌrfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmBERT as backbone LM.
The HIPE-2020 dataset is comprised of newspapers from mid 19C to mid 20C. For information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7876][1] | [0.7978][2] | [0.7803][3] | [0.7859][4] | [0.7907][5] | 78.85 Β± 0.58 |
| bs8-e10-lr3e-05 | [0.7904][6] | [0.7884][7] | [0.7876][8] | [0.783][9] | [0.7894][10] | 78.78 Β± 0.26 |
| bs8-e10-lr5e-05 | [0.7939][11] | [0.7859][12] | [0.7825][13] | [0.7849][14] | [0.7853][15] | 78.65 Β± 0.39 |
| bs4-e10-lr5e-05 | [0.7943][16] | [0.786][17] | [0.7834][18] | [0.7824][19] | [0.7736][20] | 78.39 Β± 0.67 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa MΓ€rz](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Γano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs β€οΈ
|
DBangshu/GPT2_e5_7_6 | DBangshu | "2024-06-16T11:13:42Z" | 149 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-16T11:13:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/ai4bharat_-_hercule-hi-gguf | RichardErkhov | "2024-11-15T11:41:48Z" | 6 | 0 | null | [
"gguf",
"arxiv:2410.13394",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-15T08:02:07Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
hercule-hi - GGUF
- Model creator: https://huggingface.co/ai4bharat/
- Original model: https://huggingface.co/ai4bharat/hercule-hi/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [hercule-hi.Q2_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q2_K.gguf) | Q2_K | 2.96GB |
| [hercule-hi.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [hercule-hi.Q3_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q3_K.gguf) | Q3_K | 3.74GB |
| [hercule-hi.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [hercule-hi.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [hercule-hi.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [hercule-hi.Q4_0.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_0.gguf) | Q4_0 | 4.34GB |
| [hercule-hi.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [hercule-hi.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [hercule-hi.Q4_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_K.gguf) | Q4_K | 4.58GB |
| [hercule-hi.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [hercule-hi.Q4_1.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q4_1.gguf) | Q4_1 | 4.78GB |
| [hercule-hi.Q5_0.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_0.gguf) | Q5_0 | 5.21GB |
| [hercule-hi.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [hercule-hi.Q5_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_K.gguf) | Q5_K | 5.34GB |
| [hercule-hi.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [hercule-hi.Q5_1.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q5_1.gguf) | Q5_1 | 5.65GB |
| [hercule-hi.Q6_K.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q6_K.gguf) | Q6_K | 6.14GB |
| [hercule-hi.Q8_0.gguf](https://huggingface.co/RichardErkhov/ai4bharat_-_hercule-hi-gguf/blob/main/hercule-hi.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
license: mit
language:
- hi
metrics:
- pearsonr
- spearmanr
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---
# Model Card for Hercule
Hercule is a cross-lingual evaluation model introduced as part of the CIA Suite to assess multilingual Large Language Models (LLMs). It addresses the challenge of evaluating multilingual LLMs by using English reference responses to score multilingual outputs.
Fine-tuned on the INTEL dataset, Hercule demonstrates better alignment with human judgments compared to zero-shot evaluations by proprietary models like GPT-4, on the RECON test set. It excels particularly in low-resource scenarios and supports zero-shot evaluations on unseen languages. The model employs reference-based evaluation, providing feedback and scores on a 1-5 scale, and highlights the effectiveness of lightweight fine-tuning methods (like LoRA) for efficient multilingual evaluation. All FFT models and LoRA weights are available [here](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
# Model Details
## Model Description
- **Model type:** Evaluator Language model
- **Language(s) (NLP):** Hindi
- **Related Models:** [Hercule Models](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1)
- **Resources for more information:**
- [Research paper](https://arxiv.org/abs/2410.13394)
- [GitHub Repo](https://github.com/AI4Bharat/CIA)
Hercule in fine-tuned on [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) using Intel training data and evaluated on Recon test set. Models for other languages are available in [CIA Suite](https://huggingface.co/collections/ai4bharat/cia-suite-66ea9a7e18a6c70bd8de27a1).
## Prompt Format
Weβve developed wrapper functions and classes to make it easy to work with Hercule. Check them out on our [github repository](https://github.com/AI4Bharat/CIA) β we highly recommend using them!
If you only need to use the model for your specific use case, please follow the prompt format provided below.
### Reference Guided Direct Assessment
The Hercule model expects four input components: an evaluation instruction (multilingual), a response to evaluate (multilingual), a scoring rubric (English), and a reference answer (English). Use the prompt format provided below, ensuring that you include the instruction, response, reference answer, evaluation criteria, and a detailed score rubric for each score from 1 to 5.
After running inference with HERCULE, the output will include feedback and a score, separated by the phrase ```[RESULT]```.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{instruction}
###Response to evaluate:
{response}
###Reference Answer (Score 5):
{reference_answer}
###Score Rubrics:
[{criteria}]
Score 1: {score1_rubric}
Score 2: {score2_rubric}
Score 3: {score3_rubric}
Score 4: {score4_rubric}
Score 5: {score5_rubric}
###Feedback:
```
We use the same evaluation prompt as used in [Prometheus 2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0).
## Links for Reference
- **Repository**: https://github.com/AI4Bharat/CIA
- **Paper**: https://arxiv.org/abs/2410.13394
- **Point of Contact**: [email protected], [email protected]
## License
Intel training data is created from [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) which is subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.
# Citation
If you find the following model helpful, please consider citing our paper!
**BibTeX:**
```bibtex
@article{doddapaneni2024crosslingual,
title = {Cross-Lingual Auto Evaluation for Assessing Multilingual LLMs},
author = {Sumanth Doddapaneni and Mohammed Safi Ur Rahman Khan and Dilip Venkatesh and Raj Dabre and Anoop Kunchukuttan and Mitesh M. Khapra},
year = {2024},
journal = {arXiv preprint arXiv: 2410.13394}
}
```
|
MayBashendy/ASAP_FineTuningBERT_Aug_k25_task1_organization_fold1 | MayBashendy | "2024-11-06T17:32:05Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-06T16:56:01Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_Aug_k25_task1_organization_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_Aug_k25_task1_organization_fold1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5742
- Qwk: 0.5276
- Mse: 0.5742
- Rmse: 0.7578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.0049 | 2 | 10.1128 | 0.0 | 10.1128 | 3.1801 |
| No log | 0.0098 | 4 | 8.9943 | 0.0 | 8.9943 | 2.9990 |
| No log | 0.0147 | 6 | 7.7995 | 0.0324 | 7.7995 | 2.7928 |
| No log | 0.0197 | 8 | 6.5605 | 0.0016 | 6.5605 | 2.5614 |
| No log | 0.0246 | 10 | 5.2675 | 0.0 | 5.2675 | 2.2951 |
| No log | 0.0295 | 12 | 4.1268 | 0.0093 | 4.1268 | 2.0315 |
| No log | 0.0344 | 14 | 2.9789 | 0.0303 | 2.9789 | 1.7259 |
| No log | 0.0393 | 16 | 2.1328 | 0.0040 | 2.1328 | 1.4604 |
| No log | 0.0442 | 18 | 1.6251 | 0.0 | 1.6251 | 1.2748 |
| No log | 0.0491 | 20 | 1.2514 | 0.2066 | 1.2514 | 1.1186 |
| No log | 0.0541 | 22 | 1.0487 | 0.0768 | 1.0487 | 1.0241 |
| No log | 0.0590 | 24 | 0.9011 | 0.0211 | 0.9011 | 0.9492 |
| No log | 0.0639 | 26 | 0.8605 | 0.0106 | 0.8605 | 0.9276 |
| No log | 0.0688 | 28 | 0.8888 | 0.0211 | 0.8888 | 0.9427 |
| No log | 0.0737 | 30 | 0.8545 | 0.0 | 0.8545 | 0.9244 |
| No log | 0.0786 | 32 | 0.8596 | 0.0 | 0.8596 | 0.9271 |
| No log | 0.0835 | 34 | 0.8934 | 0.0782 | 0.8934 | 0.9452 |
| No log | 0.0885 | 36 | 0.8689 | 0.0171 | 0.8689 | 0.9322 |
| No log | 0.0934 | 38 | 0.9398 | 0.0 | 0.9398 | 0.9694 |
| No log | 0.0983 | 40 | 0.9685 | 0.0 | 0.9685 | 0.9841 |
| No log | 0.1032 | 42 | 0.8812 | 0.0 | 0.8812 | 0.9387 |
| No log | 0.1081 | 44 | 0.9606 | 0.0 | 0.9606 | 0.9801 |
| No log | 0.1130 | 46 | 0.9836 | 0.0 | 0.9836 | 0.9918 |
| No log | 0.1179 | 48 | 0.9136 | 0.0 | 0.9136 | 0.9558 |
| No log | 0.1229 | 50 | 0.8807 | 0.0 | 0.8807 | 0.9385 |
| No log | 0.1278 | 52 | 0.9246 | 0.0 | 0.9246 | 0.9616 |
| No log | 0.1327 | 54 | 0.9487 | 0.0106 | 0.9487 | 0.9740 |
| No log | 0.1376 | 56 | 0.9474 | 0.0326 | 0.9474 | 0.9733 |
| No log | 0.1425 | 58 | 0.8869 | 0.0172 | 0.8869 | 0.9418 |
| No log | 0.1474 | 60 | 0.8318 | 0.0 | 0.8318 | 0.9120 |
| No log | 0.1523 | 62 | 0.8245 | 0.0 | 0.8245 | 0.9080 |
| No log | 0.1572 | 64 | 0.8219 | 0.0 | 0.8219 | 0.9066 |
| No log | 0.1622 | 66 | 0.8613 | 0.0 | 0.8613 | 0.9281 |
| No log | 0.1671 | 68 | 0.8330 | 0.0 | 0.8330 | 0.9127 |
| No log | 0.1720 | 70 | 0.8057 | 0.0067 | 0.8057 | 0.8976 |
| No log | 0.1769 | 72 | 0.7668 | 0.0390 | 0.7668 | 0.8757 |
| No log | 0.1818 | 74 | 0.7325 | 0.0276 | 0.7325 | 0.8558 |
| No log | 0.1867 | 76 | 0.7240 | 0.0443 | 0.7240 | 0.8509 |
| No log | 0.1916 | 78 | 0.7276 | 0.0645 | 0.7276 | 0.8530 |
| No log | 0.1966 | 80 | 0.7571 | 0.0752 | 0.7571 | 0.8701 |
| No log | 0.2015 | 82 | 0.7769 | 0.0752 | 0.7769 | 0.8814 |
| No log | 0.2064 | 84 | 0.7690 | 0.0645 | 0.7690 | 0.8769 |
| No log | 0.2113 | 86 | 0.7435 | 0.0583 | 0.7435 | 0.8623 |
| No log | 0.2162 | 88 | 0.7259 | 0.0276 | 0.7259 | 0.8520 |
| No log | 0.2211 | 90 | 0.7201 | 0.0379 | 0.7201 | 0.8486 |
| No log | 0.2260 | 92 | 0.7152 | 0.0482 | 0.7152 | 0.8457 |
| No log | 0.2310 | 94 | 0.7174 | 0.0482 | 0.7174 | 0.8470 |
| No log | 0.2359 | 96 | 0.7270 | 0.0470 | 0.7270 | 0.8526 |
| No log | 0.2408 | 98 | 0.7385 | 0.2595 | 0.7385 | 0.8593 |
| No log | 0.2457 | 100 | 0.7141 | 0.1470 | 0.7141 | 0.8450 |
| No log | 0.2506 | 102 | 0.7350 | 0.1244 | 0.7350 | 0.8573 |
| No log | 0.2555 | 104 | 0.7392 | 0.1205 | 0.7392 | 0.8598 |
| No log | 0.2604 | 106 | 0.7598 | 0.0568 | 0.7598 | 0.8716 |
| No log | 0.2654 | 108 | 0.8377 | 0.0444 | 0.8377 | 0.9153 |
| No log | 0.2703 | 110 | 0.8516 | 0.0418 | 0.8516 | 0.9228 |
| No log | 0.2752 | 112 | 0.8401 | 0.0431 | 0.8401 | 0.9166 |
| No log | 0.2801 | 114 | 0.8037 | 0.0520 | 0.8037 | 0.8965 |
| No log | 0.2850 | 116 | 0.7879 | 0.0728 | 0.7879 | 0.8877 |
| No log | 0.2899 | 118 | 0.7801 | 0.1424 | 0.7801 | 0.8832 |
| No log | 0.2948 | 120 | 0.7344 | 0.1201 | 0.7344 | 0.8570 |
| No log | 0.2998 | 122 | 0.6831 | 0.1459 | 0.6831 | 0.8265 |
| No log | 0.3047 | 124 | 0.6612 | 0.1889 | 0.6612 | 0.8131 |
| No log | 0.3096 | 126 | 0.6524 | 0.3548 | 0.6524 | 0.8077 |
| No log | 0.3145 | 128 | 0.6201 | 0.4054 | 0.6201 | 0.7874 |
| No log | 0.3194 | 130 | 0.5923 | 0.3200 | 0.5923 | 0.7696 |
| No log | 0.3243 | 132 | 0.6082 | 0.2435 | 0.6082 | 0.7799 |
| No log | 0.3292 | 134 | 0.6437 | 0.1258 | 0.6437 | 0.8023 |
| No log | 0.3342 | 136 | 0.6357 | 0.1563 | 0.6357 | 0.7973 |
| No log | 0.3391 | 138 | 0.6285 | 0.4111 | 0.6285 | 0.7928 |
| No log | 0.3440 | 140 | 0.7422 | 0.4357 | 0.7422 | 0.8615 |
| No log | 0.3489 | 142 | 0.7150 | 0.4322 | 0.7150 | 0.8456 |
| No log | 0.3538 | 144 | 0.6028 | 0.4091 | 0.6028 | 0.7764 |
| No log | 0.3587 | 146 | 0.6015 | 0.4225 | 0.6015 | 0.7756 |
| No log | 0.3636 | 148 | 0.6951 | 0.4823 | 0.6951 | 0.8337 |
| No log | 0.3686 | 150 | 0.7038 | 0.4990 | 0.7038 | 0.8389 |
| No log | 0.3735 | 152 | 0.5787 | 0.4695 | 0.5787 | 0.7607 |
| No log | 0.3784 | 154 | 0.6215 | 0.3352 | 0.6215 | 0.7884 |
| No log | 0.3833 | 156 | 0.6272 | 0.3477 | 0.6272 | 0.7919 |
| No log | 0.3882 | 158 | 0.5507 | 0.4780 | 0.5507 | 0.7421 |
| No log | 0.3931 | 160 | 0.5994 | 0.4818 | 0.5994 | 0.7742 |
| No log | 0.3980 | 162 | 0.5815 | 0.4971 | 0.5815 | 0.7626 |
| No log | 0.4029 | 164 | 0.5675 | 0.3627 | 0.5675 | 0.7533 |
| No log | 0.4079 | 166 | 0.5865 | 0.2939 | 0.5865 | 0.7659 |
| No log | 0.4128 | 168 | 0.5698 | 0.3939 | 0.5698 | 0.7548 |
| No log | 0.4177 | 170 | 0.6356 | 0.4899 | 0.6356 | 0.7973 |
| No log | 0.4226 | 172 | 0.6942 | 0.4900 | 0.6942 | 0.8332 |
| No log | 0.4275 | 174 | 0.6633 | 0.4815 | 0.6633 | 0.8144 |
| No log | 0.4324 | 176 | 0.5872 | 0.4197 | 0.5872 | 0.7663 |
| No log | 0.4373 | 178 | 0.6004 | 0.2276 | 0.6004 | 0.7748 |
| No log | 0.4423 | 180 | 0.6033 | 0.2297 | 0.6033 | 0.7767 |
| No log | 0.4472 | 182 | 0.5766 | 0.3970 | 0.5766 | 0.7593 |
| No log | 0.4521 | 184 | 0.6689 | 0.4717 | 0.6689 | 0.8178 |
| No log | 0.4570 | 186 | 0.7695 | 0.4042 | 0.7695 | 0.8772 |
| No log | 0.4619 | 188 | 0.7469 | 0.4181 | 0.7469 | 0.8642 |
| No log | 0.4668 | 190 | 0.6979 | 0.3625 | 0.6979 | 0.8354 |
| No log | 0.4717 | 192 | 0.7124 | 0.2142 | 0.7124 | 0.8441 |
| No log | 0.4767 | 194 | 0.7172 | 0.3972 | 0.7172 | 0.8469 |
| No log | 0.4816 | 196 | 0.7136 | 0.4752 | 0.7136 | 0.8447 |
| No log | 0.4865 | 198 | 0.7077 | 0.4783 | 0.7077 | 0.8413 |
| No log | 0.4914 | 200 | 0.7011 | 0.4889 | 0.7011 | 0.8373 |
| No log | 0.4963 | 202 | 0.6820 | 0.4918 | 0.6820 | 0.8258 |
| No log | 0.5012 | 204 | 0.6660 | 0.5004 | 0.6660 | 0.8161 |
| No log | 0.5061 | 206 | 0.6313 | 0.5193 | 0.6313 | 0.7945 |
| No log | 0.5111 | 208 | 0.6562 | 0.5317 | 0.6562 | 0.8101 |
| No log | 0.5160 | 210 | 0.5680 | 0.5665 | 0.5680 | 0.7537 |
| No log | 0.5209 | 212 | 0.5510 | 0.5565 | 0.5510 | 0.7423 |
| No log | 0.5258 | 214 | 0.5106 | 0.5486 | 0.5106 | 0.7146 |
| No log | 0.5307 | 216 | 0.5433 | 0.5795 | 0.5433 | 0.7371 |
| No log | 0.5356 | 218 | 0.4979 | 0.5820 | 0.4979 | 0.7056 |
| No log | 0.5405 | 220 | 0.4783 | 0.5050 | 0.4783 | 0.6916 |
| No log | 0.5455 | 222 | 0.4630 | 0.5287 | 0.4630 | 0.6805 |
| No log | 0.5504 | 224 | 0.4581 | 0.5551 | 0.4581 | 0.6768 |
| No log | 0.5553 | 226 | 0.5263 | 0.5927 | 0.5263 | 0.7255 |
| No log | 0.5602 | 228 | 0.7635 | 0.4351 | 0.7635 | 0.8738 |
| No log | 0.5651 | 230 | 1.0279 | 0.2025 | 1.0279 | 1.0138 |
| No log | 0.5700 | 232 | 1.0434 | 0.2820 | 1.0434 | 1.0215 |
| No log | 0.5749 | 234 | 0.8612 | 0.3846 | 0.8612 | 0.9280 |
| No log | 0.5799 | 236 | 0.7987 | 0.4225 | 0.7987 | 0.8937 |
| No log | 0.5848 | 238 | 0.8258 | 0.4022 | 0.8258 | 0.9087 |
| No log | 0.5897 | 240 | 0.7656 | 0.4263 | 0.7656 | 0.8750 |
| No log | 0.5946 | 242 | 0.7307 | 0.4419 | 0.7307 | 0.8548 |
| No log | 0.5995 | 244 | 0.7634 | 0.4449 | 0.7634 | 0.8737 |
| No log | 0.6044 | 246 | 0.6035 | 0.4980 | 0.6035 | 0.7769 |
| No log | 0.6093 | 248 | 0.5288 | 0.4402 | 0.5288 | 0.7272 |
| No log | 0.6143 | 250 | 0.5195 | 0.4752 | 0.5195 | 0.7207 |
| No log | 0.6192 | 252 | 0.5899 | 0.5062 | 0.5899 | 0.7681 |
| No log | 0.6241 | 254 | 0.6204 | 0.5011 | 0.6204 | 0.7877 |
| No log | 0.6290 | 256 | 0.7014 | 0.4740 | 0.7014 | 0.8375 |
| No log | 0.6339 | 258 | 0.6151 | 0.4904 | 0.6151 | 0.7843 |
| No log | 0.6388 | 260 | 0.5681 | 0.4732 | 0.5681 | 0.7537 |
| No log | 0.6437 | 262 | 0.5711 | 0.3029 | 0.5711 | 0.7557 |
| No log | 0.6486 | 264 | 0.5710 | 0.3919 | 0.5710 | 0.7557 |
| No log | 0.6536 | 266 | 0.5865 | 0.4336 | 0.5865 | 0.7658 |
| No log | 0.6585 | 268 | 0.5858 | 0.4150 | 0.5858 | 0.7654 |
| No log | 0.6634 | 270 | 0.5771 | 0.2926 | 0.5771 | 0.7597 |
| No log | 0.6683 | 272 | 0.5823 | 0.2582 | 0.5823 | 0.7631 |
| No log | 0.6732 | 274 | 0.5503 | 0.4403 | 0.5503 | 0.7418 |
| No log | 0.6781 | 276 | 0.6317 | 0.5141 | 0.6317 | 0.7948 |
| No log | 0.6830 | 278 | 0.6959 | 0.4922 | 0.6959 | 0.8342 |
| No log | 0.6880 | 280 | 0.6101 | 0.5248 | 0.6101 | 0.7811 |
| No log | 0.6929 | 282 | 0.5580 | 0.4842 | 0.5580 | 0.7470 |
| No log | 0.6978 | 284 | 0.5688 | 0.4833 | 0.5688 | 0.7542 |
| No log | 0.7027 | 286 | 0.6073 | 0.5096 | 0.6073 | 0.7793 |
| No log | 0.7076 | 288 | 0.6491 | 0.5226 | 0.6491 | 0.8057 |
| No log | 0.7125 | 290 | 0.6436 | 0.5091 | 0.6436 | 0.8023 |
| No log | 0.7174 | 292 | 0.6434 | 0.5084 | 0.6434 | 0.8021 |
| No log | 0.7224 | 294 | 0.5828 | 0.4337 | 0.5828 | 0.7634 |
| No log | 0.7273 | 296 | 0.5625 | 0.3556 | 0.5625 | 0.7500 |
| No log | 0.7322 | 298 | 0.5582 | 0.3241 | 0.5582 | 0.7471 |
| No log | 0.7371 | 300 | 0.5544 | 0.4767 | 0.5544 | 0.7446 |
| No log | 0.7420 | 302 | 0.6449 | 0.5024 | 0.6449 | 0.8031 |
| No log | 0.7469 | 304 | 0.6234 | 0.5138 | 0.6234 | 0.7896 |
| No log | 0.7518 | 306 | 0.5243 | 0.5019 | 0.5243 | 0.7241 |
| No log | 0.7568 | 308 | 0.5382 | 0.3475 | 0.5382 | 0.7336 |
| No log | 0.7617 | 310 | 0.5320 | 0.3510 | 0.5320 | 0.7294 |
| No log | 0.7666 | 312 | 0.4957 | 0.4885 | 0.4957 | 0.7040 |
| No log | 0.7715 | 314 | 0.5830 | 0.5293 | 0.5830 | 0.7635 |
| No log | 0.7764 | 316 | 0.5886 | 0.5480 | 0.5886 | 0.7672 |
| No log | 0.7813 | 318 | 0.4838 | 0.5468 | 0.4838 | 0.6956 |
| No log | 0.7862 | 320 | 0.4668 | 0.5205 | 0.4668 | 0.6832 |
| No log | 0.7912 | 322 | 0.4647 | 0.4996 | 0.4647 | 0.6817 |
| No log | 0.7961 | 324 | 0.4582 | 0.5328 | 0.4582 | 0.6769 |
| No log | 0.8010 | 326 | 0.4574 | 0.5561 | 0.4574 | 0.6763 |
| No log | 0.8059 | 328 | 0.4591 | 0.5546 | 0.4591 | 0.6775 |
| No log | 0.8108 | 330 | 0.4420 | 0.5515 | 0.4420 | 0.6648 |
| No log | 0.8157 | 332 | 0.4386 | 0.5533 | 0.4386 | 0.6623 |
| No log | 0.8206 | 334 | 0.4409 | 0.5459 | 0.4409 | 0.6640 |
| No log | 0.8256 | 336 | 0.4340 | 0.5524 | 0.4340 | 0.6588 |
| No log | 0.8305 | 338 | 0.4475 | 0.5581 | 0.4475 | 0.6689 |
| No log | 0.8354 | 340 | 0.4293 | 0.5655 | 0.4293 | 0.6552 |
| No log | 0.8403 | 342 | 0.4330 | 0.5695 | 0.4330 | 0.6580 |
| No log | 0.8452 | 344 | 0.4234 | 0.5587 | 0.4234 | 0.6507 |
| No log | 0.8501 | 346 | 0.4824 | 0.5736 | 0.4824 | 0.6945 |
| No log | 0.8550 | 348 | 0.5140 | 0.5911 | 0.5140 | 0.7169 |
| No log | 0.8600 | 350 | 0.4262 | 0.5602 | 0.4262 | 0.6529 |
| No log | 0.8649 | 352 | 0.4381 | 0.5275 | 0.4381 | 0.6619 |
| No log | 0.8698 | 354 | 0.4407 | 0.5713 | 0.4407 | 0.6639 |
| No log | 0.8747 | 356 | 0.6305 | 0.5876 | 0.6305 | 0.7940 |
| No log | 0.8796 | 358 | 0.7397 | 0.5399 | 0.7397 | 0.8601 |
| No log | 0.8845 | 360 | 0.5972 | 0.5745 | 0.5972 | 0.7728 |
| No log | 0.8894 | 362 | 0.4624 | 0.5444 | 0.4624 | 0.6800 |
| No log | 0.8943 | 364 | 0.4427 | 0.5714 | 0.4427 | 0.6654 |
| No log | 0.8993 | 366 | 0.4513 | 0.5967 | 0.4513 | 0.6718 |
| No log | 0.9042 | 368 | 0.5772 | 0.5873 | 0.5772 | 0.7597 |
| No log | 0.9091 | 370 | 0.6064 | 0.6086 | 0.6064 | 0.7787 |
| No log | 0.9140 | 372 | 0.4612 | 0.6155 | 0.4612 | 0.6791 |
| No log | 0.9189 | 374 | 0.4125 | 0.5595 | 0.4125 | 0.6423 |
| No log | 0.9238 | 376 | 0.4153 | 0.5622 | 0.4153 | 0.6445 |
| No log | 0.9287 | 378 | 0.4368 | 0.5968 | 0.4368 | 0.6609 |
| No log | 0.9337 | 380 | 0.4642 | 0.6211 | 0.4642 | 0.6813 |
| No log | 0.9386 | 382 | 0.4825 | 0.6245 | 0.4825 | 0.6946 |
| No log | 0.9435 | 384 | 0.4562 | 0.6044 | 0.4562 | 0.6755 |
| No log | 0.9484 | 386 | 0.4663 | 0.6003 | 0.4663 | 0.6828 |
| No log | 0.9533 | 388 | 0.5363 | 0.6060 | 0.5363 | 0.7323 |
| No log | 0.9582 | 390 | 0.7487 | 0.5385 | 0.7487 | 0.8653 |
| No log | 0.9631 | 392 | 0.7755 | 0.5165 | 0.7755 | 0.8806 |
| No log | 0.9681 | 394 | 0.6010 | 0.5651 | 0.6010 | 0.7753 |
| No log | 0.9730 | 396 | 0.5072 | 0.5756 | 0.5072 | 0.7122 |
| No log | 0.9779 | 398 | 0.5508 | 0.5799 | 0.5508 | 0.7422 |
| No log | 0.9828 | 400 | 0.6093 | 0.5552 | 0.6093 | 0.7806 |
| No log | 0.9877 | 402 | 0.7580 | 0.5384 | 0.7580 | 0.8706 |
| No log | 0.9926 | 404 | 0.7525 | 0.5377 | 0.7525 | 0.8675 |
| No log | 0.9975 | 406 | 0.6594 | 0.5489 | 0.6594 | 0.8120 |
| No log | 1.0025 | 408 | 0.6561 | 0.5508 | 0.6561 | 0.8100 |
| No log | 1.0074 | 410 | 0.5611 | 0.5819 | 0.5611 | 0.7490 |
| No log | 1.0123 | 412 | 0.5213 | 0.5625 | 0.5213 | 0.7220 |
| No log | 1.0172 | 414 | 0.5723 | 0.5771 | 0.5723 | 0.7565 |
| No log | 1.0221 | 416 | 0.5687 | 0.5930 | 0.5687 | 0.7541 |
| No log | 1.0270 | 418 | 0.4838 | 0.6001 | 0.4838 | 0.6956 |
| No log | 1.0319 | 420 | 0.4607 | 0.6038 | 0.4607 | 0.6788 |
| No log | 1.0369 | 422 | 0.4615 | 0.6063 | 0.4615 | 0.6794 |
| No log | 1.0418 | 424 | 0.4450 | 0.5948 | 0.4450 | 0.6671 |
| No log | 1.0467 | 426 | 0.4441 | 0.6054 | 0.4441 | 0.6664 |
| No log | 1.0516 | 428 | 0.4670 | 0.6159 | 0.4670 | 0.6834 |
| No log | 1.0565 | 430 | 0.5026 | 0.6090 | 0.5026 | 0.7090 |
| No log | 1.0614 | 432 | 0.4743 | 0.5952 | 0.4743 | 0.6887 |
| No log | 1.0663 | 434 | 0.4243 | 0.5902 | 0.4243 | 0.6514 |
| No log | 1.0713 | 436 | 0.4351 | 0.5888 | 0.4351 | 0.6596 |
| No log | 1.0762 | 438 | 0.4700 | 0.6032 | 0.4700 | 0.6855 |
| No log | 1.0811 | 440 | 0.4343 | 0.5845 | 0.4343 | 0.6590 |
| No log | 1.0860 | 442 | 0.4483 | 0.5462 | 0.4483 | 0.6696 |
| No log | 1.0909 | 444 | 0.4533 | 0.5365 | 0.4533 | 0.6733 |
| No log | 1.0958 | 446 | 0.4362 | 0.5746 | 0.4362 | 0.6605 |
| No log | 1.1007 | 448 | 0.4928 | 0.5906 | 0.4928 | 0.7020 |
| No log | 1.1057 | 450 | 0.5399 | 0.6173 | 0.5399 | 0.7348 |
| No log | 1.1106 | 452 | 0.4619 | 0.5991 | 0.4619 | 0.6797 |
| No log | 1.1155 | 454 | 0.4252 | 0.5727 | 0.4252 | 0.6521 |
| No log | 1.1204 | 456 | 0.4228 | 0.5741 | 0.4228 | 0.6502 |
| No log | 1.1253 | 458 | 0.4567 | 0.6257 | 0.4567 | 0.6758 |
| No log | 1.1302 | 460 | 0.6388 | 0.6334 | 0.6388 | 0.7992 |
| No log | 1.1351 | 462 | 0.6192 | 0.6482 | 0.6192 | 0.7869 |
| No log | 1.1400 | 464 | 0.4595 | 0.6285 | 0.4595 | 0.6778 |
| No log | 1.1450 | 466 | 0.4330 | 0.5952 | 0.4330 | 0.6580 |
| No log | 1.1499 | 468 | 0.4991 | 0.6471 | 0.4991 | 0.7065 |
| No log | 1.1548 | 470 | 0.6608 | 0.7030 | 0.6608 | 0.8129 |
| No log | 1.1597 | 472 | 0.5729 | 0.6976 | 0.5729 | 0.7569 |
| No log | 1.1646 | 474 | 0.4662 | 0.6349 | 0.4662 | 0.6828 |
| No log | 1.1695 | 476 | 0.4311 | 0.6056 | 0.4311 | 0.6566 |
| No log | 1.1744 | 478 | 0.4604 | 0.6280 | 0.4604 | 0.6786 |
| No log | 1.1794 | 480 | 0.5520 | 0.6610 | 0.5520 | 0.7430 |
| No log | 1.1843 | 482 | 0.5067 | 0.6294 | 0.5067 | 0.7118 |
| No log | 1.1892 | 484 | 0.4372 | 0.5604 | 0.4372 | 0.6612 |
| No log | 1.1941 | 486 | 0.4510 | 0.4927 | 0.4510 | 0.6716 |
| No log | 1.1990 | 488 | 0.4446 | 0.4944 | 0.4446 | 0.6668 |
| No log | 1.2039 | 490 | 0.4548 | 0.5763 | 0.4548 | 0.6744 |
| No log | 1.2088 | 492 | 0.4975 | 0.6070 | 0.4975 | 0.7053 |
| No log | 1.2138 | 494 | 0.5297 | 0.6055 | 0.5297 | 0.7278 |
| No log | 1.2187 | 496 | 0.5612 | 0.6027 | 0.5612 | 0.7492 |
| No log | 1.2236 | 498 | 0.4947 | 0.5789 | 0.4947 | 0.7034 |
| 0.5107 | 1.2285 | 500 | 0.4709 | 0.5476 | 0.4709 | 0.6862 |
| 0.5107 | 1.2334 | 502 | 0.4801 | 0.5732 | 0.4801 | 0.6929 |
| 0.5107 | 1.2383 | 504 | 0.5205 | 0.5463 | 0.5205 | 0.7215 |
| 0.5107 | 1.2432 | 506 | 0.6151 | 0.5699 | 0.6151 | 0.7843 |
| 0.5107 | 1.2482 | 508 | 0.5700 | 0.5693 | 0.5700 | 0.7550 |
| 0.5107 | 1.2531 | 510 | 0.4834 | 0.5265 | 0.4834 | 0.6953 |
| 0.5107 | 1.2580 | 512 | 0.4777 | 0.5232 | 0.4777 | 0.6912 |
| 0.5107 | 1.2629 | 514 | 0.5004 | 0.5599 | 0.5004 | 0.7074 |
| 0.5107 | 1.2678 | 516 | 0.6491 | 0.5823 | 0.6491 | 0.8056 |
| 0.5107 | 1.2727 | 518 | 0.7351 | 0.6037 | 0.7351 | 0.8574 |
| 0.5107 | 1.2776 | 520 | 0.5979 | 0.5796 | 0.5979 | 0.7733 |
| 0.5107 | 1.2826 | 522 | 0.4755 | 0.5737 | 0.4755 | 0.6896 |
| 0.5107 | 1.2875 | 524 | 0.4747 | 0.4636 | 0.4747 | 0.6890 |
| 0.5107 | 1.2924 | 526 | 0.4686 | 0.4788 | 0.4686 | 0.6845 |
| 0.5107 | 1.2973 | 528 | 0.4581 | 0.5544 | 0.4581 | 0.6768 |
| 0.5107 | 1.3022 | 530 | 0.5497 | 0.6133 | 0.5497 | 0.7414 |
| 0.5107 | 1.3071 | 532 | 0.5933 | 0.6177 | 0.5933 | 0.7703 |
| 0.5107 | 1.3120 | 534 | 0.4957 | 0.5906 | 0.4957 | 0.7041 |
| 0.5107 | 1.3170 | 536 | 0.4449 | 0.5474 | 0.4449 | 0.6670 |
| 0.5107 | 1.3219 | 538 | 0.4461 | 0.5397 | 0.4461 | 0.6679 |
| 0.5107 | 1.3268 | 540 | 0.4911 | 0.5874 | 0.4911 | 0.7008 |
| 0.5107 | 1.3317 | 542 | 0.5566 | 0.6092 | 0.5566 | 0.7461 |
| 0.5107 | 1.3366 | 544 | 0.6142 | 0.5899 | 0.6142 | 0.7837 |
| 0.5107 | 1.3415 | 546 | 0.5344 | 0.5407 | 0.5344 | 0.7310 |
| 0.5107 | 1.3464 | 548 | 0.5157 | 0.4574 | 0.5157 | 0.7181 |
| 0.5107 | 1.3514 | 550 | 0.5250 | 0.4391 | 0.5250 | 0.7246 |
| 0.5107 | 1.3563 | 552 | 0.5342 | 0.4987 | 0.5342 | 0.7309 |
| 0.5107 | 1.3612 | 554 | 0.5742 | 0.5276 | 0.5742 | 0.7578 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
PrunaAI/ResplendentAI-Aura_v2_7B-AWQ-4bit-smashed | PrunaAI | "2024-07-15T19:46:15Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pruna-ai",
"base_model:ResplendentAI/Aura_v2_7B",
"base_model:quantized:ResplendentAI/Aura_v2_7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-07-15T19:44:23Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ResplendentAI/Aura_v2_7B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo ResplendentAI/Aura_v2_7B installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized("PrunaAI/ResplendentAI-Aura_v2_7B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ResplendentAI/Aura_v2_7B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ResplendentAI/Aura_v2_7B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
ludoviciarraga/checkpoint-319 | ludoviciarraga | "2024-01-11T17:58:37Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | "2024-01-11T17:58:28Z" | ---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.6.2
|
nttx/41e6f11a-6f29-4efa-b971-55dcaf7190c2 | nttx | "2025-01-22T10:24:41Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | "2025-01-22T10:21:15Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 41e6f11a-6f29-4efa-b971-55dcaf7190c2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 807edbe01d3143fb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/807edbe01d3143fb_train_data.json
type:
field_input: question
field_instruction: answer
field_output: context
field_system: distractors
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/41e6f11a-6f29-4efa-b971-55dcaf7190c2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/807edbe01d3143fb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 87d40317-ca50-4c35-ad9f-1a82b7dfae06
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 87d40317-ca50-4c35-ad9f-1a82b7dfae06
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 41e6f11a-6f29-4efa-b971-55dcaf7190c2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1774
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 12.4849 | 0.0006 | 1 | 3.3527 |
| 13.8711 | 0.0281 | 50 | 3.3471 |
| 14.7456 | 0.0562 | 100 | 3.2220 |
| 15.1491 | 0.0843 | 150 | 3.1837 |
| 14.6044 | 0.1125 | 200 | 3.1774 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
melisa/angular_distance_last_mistralai_Mistral-7B-Instruct-v0.3_cut_22_5 | melisa | "2024-05-26T11:05:29Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-26T11:01:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
YueNoraWang/test1 | YueNoraWang | "2024-04-30T13:49:09Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-30T13:48:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlSamCur123/DeepSeek-R1-Distill-Llama-8B | AlSamCur123 | "2025-02-02T13:54:49Z" | 1,039 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-31T01:55:32Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlSamCur123
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
furiosa-ai/Llama-3.1-8B-Instruct | furiosa-ai | "2025-04-08T21:36:05Z" | 5 | 0 | furiosa-llm | [
"furiosa-llm",
"llama",
"furiosa-ai",
"llama-3",
"text-generation",
"conversational",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | text-generation | "2025-04-03T00:25:44Z" | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
pipeline_tag: text-generation
library_name: furiosa-llm
tags:
- furiosa-ai
- llama
- llama-3
---
# Model Overview
- **Model Architecture:** Meta-Llama-3
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Context Length:** 32k tokens
- Maximum Prompt Length: 32768 tokens
- Maximum Generation Length: 32768 tokens
- **Intended Use Cases:** Intended for commercial and non-commercial use. Same as [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct), this models is intended for assistant-like chat.
- **Release Date:** 04/12/2025
- **Version:** v2025.2
- **License(s):** [Llama3.1](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/blob/main/LICENSE)
- **Supported Inference Engine(s):** Furiosa LLM
- **Supported Hardware Compatibility:** FuriosaAI RNGD
- **Preferred Operating System(s):** Linux
- **Quantization:** No
- **Datasets:**
## Description:
The FuriosaAI Meta-Llama-3.1-8B-Instruct model is the pre-compiled version of the [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct), which is an auto-regressive language model that uses an optimized transformer architecture.
This model is ready for commercial and non-commercial use. <br>
## Usage
To run this model with [Furiosa-LLM](https://developer.furiosa.ai/latest/en/furiosa_llm/intro.html), follow the sample command below after [installing Furiosa-LLM and its prerequisites](https://developer.furiosa.ai/latest/en/getting_started/furiosa_llm.html#installing-furiosa-llm).
```sh
furiosa-llm serve furiosa-ai/Llama-3.1-8B-Instruct
```
|
jmcallister/llm-wikipedia-game-qwen-2.5-0.5b-v0 | jmcallister | "2025-03-23T21:47:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-23T21:43:52Z" | ---
base_model: unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jmcallister
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
c4st4/_llama3.2-3b-gguf | c4st4 | "2025-03-19T14:25:25Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-19T14:25:02Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** c4st4
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AnonymousCS/populism_model219 | AnonymousCS | "2025-01-22T21:45:03Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:AnonymousCS/populism_multilingual_bert_uncased",
"base_model:finetune:AnonymousCS/populism_multilingual_bert_uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-22T21:43:13Z" | ---
library_name: transformers
license: apache-2.0
base_model: AnonymousCS/populism_multilingual_bert_uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: populism_model219
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# populism_model219
This model is a fine-tuned version of [AnonymousCS/populism_multilingual_bert_uncased](https://huggingface.co/AnonymousCS/populism_multilingual_bert_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2720
- Accuracy: 0.9430
- 1-f1: 0.5581
- 1-recall: 0.6857
- 1-precision: 0.4706
- Balanced Acc: 0.8215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6293 | 1.0 | 42 | 0.2575 | 0.9490 | 0.5952 | 0.7143 | 0.5102 | 0.8382 |
| 0.2618 | 2.0 | 84 | 0.2198 | 0.9205 | 0.5546 | 0.9429 | 0.3929 | 0.9311 |
| 0.1608 | 3.0 | 126 | 0.2720 | 0.9430 | 0.5581 | 0.6857 | 0.4706 | 0.8215 |
### Framework versions
- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
nhoxinh/bf9d3ceb-813b-4dc6-8f90-8e23c8966c0a | nhoxinh | "2025-01-21T14:03:11Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-160m",
"base_model:adapter:JackFram/llama-160m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-21T13:58:32Z" | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-160m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bf9d3ceb-813b-4dc6-8f90-8e23c8966c0a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-160m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4d0a2a7788356cae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4d0a2a7788356cae_train_data.json
type:
field_input: prompt
field_instruction: user_question
field_output: assistant_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/bf9d3ceb-813b-4dc6-8f90-8e23c8966c0a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4d0a2a7788356cae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 93ef5447-6df9-4350-96a8-0b70d8401e39
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 93ef5447-6df9-4350-96a8-0b70d8401e39
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bf9d3ceb-813b-4dc6-8f90-8e23c8966c0a
This model is a fine-tuned version of [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3327 | 0.4502 | 200 | 0.3290 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF | mradermacher | "2024-12-28T06:19:08Z" | 29 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jaspionjader/Auro-Kosmos-EVAA-v2-8B",
"base_model:quantized:jaspionjader/Auro-Kosmos-EVAA-v2-8B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-28T03:09:44Z" | ---
base_model: jaspionjader/Auro-Kosmos-EVAA-v2-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jaspionjader/Auro-Kosmos-EVAA-v2-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Auro-Kosmos-EVAA-v2-8B-i1-GGUF/resolve/main/Auro-Kosmos-EVAA-v2-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jakiAJK/granite-3.1-8b-instruct_AWQ | jakiAJK | "2025-01-21T01:28:06Z" | 223 | 0 | transformers | [
"transformers",
"safetensors",
"granite",
"text-generation",
"conversational",
"base_model:ibm-granite/granite-3.1-8b-instruct",
"base_model:quantized:ibm-granite/granite-3.1-8b-instruct",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | "2024-12-22T16:32:39Z" | ---
library_name: transformers
base_model:
- ibm-granite/granite-3.1-8b-instruct
---
### Requirements
```python
pip install -U transformers autoawq
```
#### Transformers inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
device = "auto"
model_name = "jakiAJK/granite-3.1-8b-instruct_AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map= device, trust_remote_code= True, torch_dtype= dtype)
model.eval()
chat = [
{ "role": "user", "content": "List any 5 country capitals." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(chat, return_tensors="pt").to('cuda')
output = model.generate(**input_tokens,
max_new_tokens=100)
output = tokenizer.batch_decode(output)
print(output)
``` |
vishakr01/sn29_mar15_c0a | vishakr01 | "2025-03-19T00:41:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-19T00:36:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yhsees/isakovsky_style_LoRA | yhsees | "2025-04-02T16:31:39Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-04-02T16:31:33Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: art in ISAKOVSKY style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - yhsees/isakovsky_style_LoRA
<Gallery />
## Model description
These are yhsees/isakovsky_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use art in ISAKOVSKY style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](yhsees/isakovsky_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
DanJoshua/estudiante_Swin3D_profesor_MViT_akl_RLVS | DanJoshua | "2024-11-08T20:51:58Z" | 35 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2024-11-08T19:18:01Z" | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: estudiante_Swin3D_profesor_MViT_akl_RLVS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# estudiante_Swin3D_profesor_MViT_akl_RLVS
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0711
- Accuracy: 0.9829
- F1: 0.9829
- Precision: 0.9829
- Recall: 0.9829
- Roc Auc: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 560
- training_steps: 5600
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
| 6.829 | 1.0214 | 280 | 0.0851 | 0.9791 | 0.9791 | 0.9791 | 0.9791 | 0.9920 |
| 3.7322 | 3.0143 | 560 | 0.0727 | 0.9817 | 0.9817 | 0.9817 | 0.9817 | 0.9935 |
| 1.8264 | 5.0071 | 840 | 0.1005 | 0.9764 | 0.9764 | 0.9771 | 0.9764 | 0.9906 |
| 1.3973 | 6.0286 | 1120 | 0.1179 | 0.9817 | 0.9817 | 0.9817 | 0.9817 | 0.9970 |
| 1.4977 | 8.0214 | 1400 | 0.0639 | 0.9843 | 0.9843 | 0.9843 | 0.9843 | 0.9981 |
| 1.8736 | 10.0143 | 1680 | 0.0933 | 0.9791 | 0.9791 | 0.9793 | 0.9791 | 0.9838 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.0.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
tomoohive/ppo-LunarLander-v2 | tomoohive | "2023-06-01T01:03:23Z" | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-01T01:03:03Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.46 +/- 14.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
s3nh/starcoderbase-1b-GPTQ | s3nh | "2023-07-29T18:14:46Z" | 59 | 0 | transformers | [
"transformers",
"gpt_bigcode",
"text-generation",
"en",
"arxiv:2305.06161",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-28T11:05:55Z" | ---
license: openrail
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GPTQ Format model files for [This project](https://huggingface.co/bigcode/starcoderbase-1b/tree/main).
### inference
# Original model card
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that let's you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code from 80+ programming languages. The predominant natural language in source code is English although other languages are also present. As such the model is capable of generating code snippets provided some context but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits. See [the paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) for an in-depth discussion of the model limitations.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 500k
- **Pretraining tokens:** 1 trillion
- **Precision:** bfloat16
## Hardware
- **GPUs:** 128 Tesla A100
- **Training time:** 11 days
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
# Citation
```
@article{li2023starcoder,
title={StarCoder: may the source be with you!},
author={Raymond Li and Loubna Ben Allal and Yangtian Zi and Niklas Muennighoff and Denis Kocetkov and Chenghao Mou and Marc Marone and Christopher Akiki and Jia Li and Jenny Chim and Qian Liu and Evgenii Zheltonozhskii and Terry Yue Zhuo and Thomas Wang and Olivier Dehaene and Mishig Davaadorj and Joel Lamy-Poirier and JoΓ£o Monteiro and Oleh Shliazhko and Nicolas Gontier and Nicholas Meade and Armel Zebaze and Ming-Ho Yee and Logesh Kumar Umapathi and Jian Zhu and Benjamin Lipkin and Muhtasham Oblokulov and Zhiruo Wang and Rudra Murthy and Jason Stillerman and Siva Sankalp Patel and Dmitry Abulkhanov and Marco Zocca and Manan Dey and Zhihan Zhang and Nour Fahmy and Urvashi Bhattacharyya and Wenhao Yu and Swayam Singh and Sasha Luccioni and Paulo Villegas and Maxim Kunakov and Fedor Zhdanov and Manuel Romero and Tony Lee and Nadav Timor and Jennifer Ding and Claire Schlesinger and Hailey Schoelkopf and Jan Ebert and Tri Dao and Mayank Mishra and Alex Gu and Jennifer Robinson and Carolyn Jane Anderson and Brendan Dolan-Gavitt and Danish Contractor and Siva Reddy and Daniel Fried and Dzmitry Bahdanau and Yacine Jernite and Carlos MuΓ±oz Ferrandis and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2023},
eprint={2305.06161},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
chinhon/headline_writer2 | chinhon | "2023-07-12T01:02:58Z" | 122 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autonlp",
"en",
"dataset:chinhon/autonlp-data-sg_headline_generator",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-03-02T23:29:05Z" | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP π€"
datasets:
- chinhon/autonlp-data-sg_headline_generator
co2_eq_emissions: 396.629376395644
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 25965856
- CO2 Emissions (in grams): 396.629376395644
## Validation Metrics
- Loss: 1.4130597114562988
- Rouge1: 51.7922
- Rouge2: 30.8259
- RougeL: 46.4585
- RougeLsum: 46.4807
- Gen Len: 15.8411
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/chinhon/autonlp-sg_headline_generator-25965856
``` |
yangheng/OmniGenome-v1.5 | yangheng | "2025-03-11T18:50:49Z" | 40 | 0 | null | [
"safetensors",
"omnigenome",
"GFM",
"OmniGenome",
"custom_code",
"rna",
"dna",
"license:mit",
"region:us"
] | null | "2025-03-04T10:39:12Z" | ---
license: mit
language:
- rna
- dna
tags:
- GFM
- OmniGenome
widget:
- text: AGUCGGCAGAAAAGUUGGUGCUUAGACCACGCCC<mask>CUAGCCGCCGUAAUAAUAGAUAAAUAGGCG
---
# Bridging Sequence-Structure Alignment in RNA Foundation Models (AAAI 2025)
## Model Description
**OmniGenome** is an advanced RNA foundation model that introduces sequence-structure alignment to genomic modeling. The model bridges the gap between RNA sequences and their secondary structures, enabling bidirectional mappings that improve the flow of genomic information between RNA sequences and structures. With OmniGenome, researchers can achieve improved performance in RNA-related tasks, such as RNA design, secondary structure prediction, and various downstream genomic tasks. It also demon...
- **Model type**: Transformer-based (52M and 186M parameter versions)
- **Languages**: RNA sequences and structures
- **Pretraining**: The model is pretrained on RNA sequences from over 1,000 plant species from the OneKP database. Secondary structures were predicted using ViennaRNA.
- **Key Features**:
- Seq2Str (Sequence to Structure) and Str2Seq (Structure to Sequence) mapping
- RNA design and secondary structure prediction
- Generalizability to DNA genomic tasks
## Intended Use
This model is ideal for:
- RNA secondary structure prediction
- RNA design via structure-to-sequence mapping
- Genomic sequence understanding tasks, such as mRNA degradation rate prediction
- Transfer learning to DNA tasks, including promoter strength prediction, gene expression regression, and more
It is a valuable tool for researchers in RNA genomics, bioinformatics, and molecular biology.
## Limitations
OmniGenome is primarily trained on RNA data and its transferability to other genomic data (like human DNA) may require further finetuning. While it demonstrates excellent performance in in-silico experiments, in-vivo validation is yet to be performed.
## Training Data
OmniGenome was pretrained on large-scale RNA sequences from the OneKP initiative, which contains transcriptome data from 1,124 plant species. These sequences were processed and cleaned to ensure data quality, and secondary structures were annotated using ViennaRNA. The alignment between sequences and structures was a core part of the training process, enabling both Seq2Str and Str2Seq capabilities.
## Evaluation Results
OmniGenome was evaluated on multiple in-silico RNA benchmarks, including the EternaV2 RNA design benchmark, where it solved 74% of the puzzles, compared to only 3% by previous foundation models. It also achieved state-of-the-art performance in tasks such as mRNA degradation rate prediction and secondary structure prediction. In DNA-related tasks, OmniGenome achieved high F1 scores in tasks like chromatin accessibility prediction and polyadenylation site classification, even without any DNA-specific...
## How to Use
Hereβs an example of how to load and use OmniGenome on Hugging Face:
``` python
from transformers import AutoTokenizer, AutoModel
# Load pre-trained model tokenizer
tokenizer = AutoTokenizer.from_pretrained("yangheng/OmniGenome")
# Load pre-trained model
model = AutoModel.from_pretrained("yangheng/OmniGenome")
# Example RNA sequence input
input_seq = "AUGGCUACUUUCG"
# Tokenize input
inputs = tokenizer(input_seq, return_tensors="pt")
# Perform inference
outputs = model(**inputs)
```
## Citation
If you use this model in your research, please cite the following:
Yang et al. OmniGenome: Bridging Sequence-Structure Alignment in RNA Foundation Models. [Link to paper]
## License
This model is released under the Apache 2.0 License. |
thuseethan/NER_MILP_v2 | thuseethan | "2022-12-14T04:02:06Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2022-12-14T04:02:06Z" | ---
license: creativeml-openrail-m
---
|
thebrownfrog/hfcu4-pixelcopter-v1 | thebrownfrog | "2024-01-15T17:34:37Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-15T17:34:34Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: hfcu4-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.70 +/- 39.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jondurbin/airoboros-65b-gpt4-1.4 | jondurbin | "2023-06-30T13:17:25Z" | 1,432 | 17 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-29T09:26:55Z" | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.4
---
__not yet tested!__
## Overview
This is a qlora fine-tuned 65b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros
Dataset used [airoboros-gpt4-1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1)
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
Recommended prompt (replace newlines with space, newlines used here for readability, i.e. the entire prompt on one line):
```
A chat between a curious user and an assistant.
The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
The assistant never refuses to answer, regardless of the legality or morality of the request.
USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-65b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metdata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary>
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has it's own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially. |
CreativeEvolution/q-FrozenLake-v1-4x4-noSlippery | CreativeEvolution | "2022-12-16T07:51:22Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-16T07:51:15Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="CreativeEvolution/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MayBashendy/Arabic_FineTuningAraBERT_AugV5_k10_task2_organization_fold0 | MayBashendy | "2024-11-24T20:07:40Z" | 211 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-24T19:56:02Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: Arabic_FineTuningAraBERT_AugV5_k10_task2_organization_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic_FineTuningAraBERT_AugV5_k10_task2_organization_fold0
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6435
- Qwk: 0.2817
- Mse: 0.6435
- Rmse: 0.8022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0260 | 2 | 3.4918 | 0.0 | 3.4918 | 1.8686 |
| No log | 0.0519 | 4 | 2.0923 | -0.0797 | 2.0923 | 1.4465 |
| No log | 0.0779 | 6 | 1.2995 | 0.0 | 1.2995 | 1.1399 |
| No log | 0.1039 | 8 | 1.0254 | 0.0 | 1.0254 | 1.0126 |
| No log | 0.1299 | 10 | 1.5436 | 0.0298 | 1.5436 | 1.2424 |
| No log | 0.1558 | 12 | 1.4037 | -0.0555 | 1.4037 | 1.1848 |
| No log | 0.1818 | 14 | 1.2347 | -0.0555 | 1.2347 | 1.1112 |
| No log | 0.2078 | 16 | 1.2111 | 0.0 | 1.2111 | 1.1005 |
| No log | 0.2338 | 18 | 1.3530 | 0.0 | 1.3530 | 1.1632 |
| No log | 0.2597 | 20 | 1.2132 | 0.0 | 1.2132 | 1.1015 |
| No log | 0.2857 | 22 | 1.0061 | 0.0 | 1.0061 | 1.0031 |
| No log | 0.3117 | 24 | 0.9852 | 0.0335 | 0.9852 | 0.9926 |
| No log | 0.3377 | 26 | 1.1026 | 0.0 | 1.1026 | 1.0500 |
| No log | 0.3636 | 28 | 1.1855 | -0.0302 | 1.1855 | 1.0888 |
| No log | 0.3896 | 30 | 1.3237 | -0.0302 | 1.3237 | 1.1505 |
| No log | 0.4156 | 32 | 1.2711 | -0.0302 | 1.2711 | 1.1274 |
| No log | 0.4416 | 34 | 1.1542 | 0.0 | 1.1542 | 1.0744 |
| No log | 0.4675 | 36 | 1.1165 | 0.0 | 1.1165 | 1.0567 |
| No log | 0.4935 | 38 | 1.2691 | 0.0 | 1.2691 | 1.1265 |
| No log | 0.5195 | 40 | 1.3098 | 0.0173 | 1.3098 | 1.1444 |
| No log | 0.5455 | 42 | 1.1971 | 0.0 | 1.1971 | 1.0941 |
| No log | 0.5714 | 44 | 0.9343 | -0.0302 | 0.9343 | 0.9666 |
| No log | 0.5974 | 46 | 0.7409 | 0.3581 | 0.7409 | 0.8607 |
| No log | 0.6234 | 48 | 0.6991 | 0.0567 | 0.6991 | 0.8361 |
| No log | 0.6494 | 50 | 0.6857 | 0.0957 | 0.6857 | 0.8280 |
| No log | 0.6753 | 52 | 0.6895 | 0.0099 | 0.6895 | 0.8304 |
| No log | 0.7013 | 54 | 0.7134 | 0.0045 | 0.7134 | 0.8446 |
| No log | 0.7273 | 56 | 0.8220 | 0.0 | 0.8220 | 0.9066 |
| No log | 0.7532 | 58 | 0.9428 | 0.0 | 0.9428 | 0.9710 |
| No log | 0.7792 | 60 | 1.0145 | 0.0 | 1.0145 | 1.0072 |
| No log | 0.8052 | 62 | 1.1565 | 0.0 | 1.1565 | 1.0754 |
| No log | 0.8312 | 64 | 1.3672 | 0.0577 | 1.3672 | 1.1693 |
| No log | 0.8571 | 66 | 1.3794 | 0.0577 | 1.3794 | 1.1745 |
| No log | 0.8831 | 68 | 1.1608 | 0.0 | 1.1608 | 1.0774 |
| No log | 0.9091 | 70 | 0.8882 | 0.1213 | 0.8882 | 0.9424 |
| No log | 0.9351 | 72 | 0.6988 | 0.1192 | 0.6988 | 0.8359 |
| No log | 0.9610 | 74 | 0.6566 | 0.1962 | 0.6566 | 0.8103 |
| No log | 0.9870 | 76 | 0.6618 | 0.2613 | 0.6618 | 0.8135 |
| No log | 1.0130 | 78 | 0.6703 | 0.2184 | 0.6703 | 0.8187 |
| No log | 1.0390 | 80 | 0.6774 | 0.1558 | 0.6774 | 0.8230 |
| No log | 1.0649 | 82 | 0.6890 | -0.0418 | 0.6890 | 0.8300 |
| No log | 1.0909 | 84 | 0.6875 | -0.0153 | 0.6875 | 0.8292 |
| No log | 1.1169 | 86 | 0.6986 | 0.1168 | 0.6986 | 0.8358 |
| No log | 1.1429 | 88 | 0.8049 | 0.0128 | 0.8049 | 0.8971 |
| No log | 1.1688 | 90 | 0.8905 | 0.1213 | 0.8905 | 0.9437 |
| No log | 1.1948 | 92 | 0.8931 | 0.1213 | 0.8931 | 0.9450 |
| No log | 1.2208 | 94 | 0.8875 | 0.1213 | 0.8875 | 0.9421 |
| No log | 1.2468 | 96 | 0.7678 | 0.1382 | 0.7678 | 0.8762 |
| No log | 1.2727 | 98 | 0.6949 | 0.1755 | 0.6949 | 0.8336 |
| No log | 1.2987 | 100 | 0.7214 | -0.1221 | 0.7214 | 0.8494 |
| No log | 1.3247 | 102 | 0.7389 | -0.1608 | 0.7389 | 0.8596 |
| No log | 1.3506 | 104 | 0.7301 | -0.0943 | 0.7301 | 0.8545 |
| No log | 1.3766 | 106 | 0.7278 | 0.1755 | 0.7278 | 0.8531 |
| No log | 1.4026 | 108 | 0.7358 | 0.0625 | 0.7358 | 0.8578 |
| No log | 1.4286 | 110 | 0.7449 | 0.2289 | 0.7449 | 0.8631 |
| No log | 1.4545 | 112 | 0.7603 | 0.1213 | 0.7603 | 0.8719 |
| No log | 1.4805 | 114 | 0.8289 | 0.1213 | 0.8289 | 0.9104 |
| No log | 1.5065 | 116 | 0.8596 | 0.0 | 0.8596 | 0.9271 |
| No log | 1.5325 | 118 | 0.8113 | 0.0 | 0.8113 | 0.9007 |
| No log | 1.5584 | 120 | 0.6993 | 0.1213 | 0.6993 | 0.8363 |
| No log | 1.5844 | 122 | 0.5848 | 0.5312 | 0.5848 | 0.7647 |
| No log | 1.6104 | 124 | 0.5538 | 0.2150 | 0.5538 | 0.7442 |
| No log | 1.6364 | 126 | 0.5505 | 0.2553 | 0.5505 | 0.7419 |
| No log | 1.6623 | 128 | 0.5562 | 0.2725 | 0.5562 | 0.7458 |
| No log | 1.6883 | 130 | 0.5701 | 0.4247 | 0.5701 | 0.7550 |
| No log | 1.7143 | 132 | 0.5603 | 0.3131 | 0.5603 | 0.7485 |
| No log | 1.7403 | 134 | 0.5802 | 0.2373 | 0.5802 | 0.7617 |
| No log | 1.7662 | 136 | 0.6003 | 0.2794 | 0.6003 | 0.7748 |
| No log | 1.7922 | 138 | 0.5979 | 0.2373 | 0.5979 | 0.7733 |
| No log | 1.8182 | 140 | 0.6006 | 0.1356 | 0.6006 | 0.7750 |
| No log | 1.8442 | 142 | 0.6318 | 0.1141 | 0.6318 | 0.7949 |
| No log | 1.8701 | 144 | 0.6564 | 0.1141 | 0.6564 | 0.8102 |
| No log | 1.8961 | 146 | 0.6907 | 0.1356 | 0.6907 | 0.8311 |
| No log | 1.9221 | 148 | 0.7124 | 0.1356 | 0.7124 | 0.8440 |
| No log | 1.9481 | 150 | 0.7234 | 0.0735 | 0.7234 | 0.8505 |
| No log | 1.9740 | 152 | 0.7220 | 0.0735 | 0.7220 | 0.8497 |
| No log | 2.0 | 154 | 0.7673 | -0.0553 | 0.7673 | 0.8759 |
| No log | 2.0260 | 156 | 0.8385 | -0.0553 | 0.8385 | 0.9157 |
| No log | 2.0519 | 158 | 0.8826 | -0.0153 | 0.8826 | 0.9395 |
| No log | 2.0779 | 160 | 0.8804 | -0.0153 | 0.8804 | 0.9383 |
| No log | 2.1039 | 162 | 0.8190 | 0.0503 | 0.8190 | 0.9050 |
| No log | 2.1299 | 164 | 0.7229 | 0.1141 | 0.7229 | 0.8502 |
| No log | 2.1558 | 166 | 0.6805 | 0.1558 | 0.6805 | 0.8249 |
| No log | 2.1818 | 168 | 0.7115 | 0.2186 | 0.7115 | 0.8435 |
| No log | 2.2078 | 170 | 0.7877 | 0.2186 | 0.7877 | 0.8875 |
| No log | 2.2338 | 172 | 0.7891 | 0.2186 | 0.7891 | 0.8883 |
| No log | 2.2597 | 174 | 0.7050 | 0.2759 | 0.7050 | 0.8396 |
| No log | 2.2857 | 176 | 0.7187 | 0.3029 | 0.7187 | 0.8477 |
| No log | 2.3117 | 178 | 0.7852 | 0.2516 | 0.7852 | 0.8861 |
| No log | 2.3377 | 180 | 0.8314 | 0.3029 | 0.8314 | 0.9118 |
| No log | 2.3636 | 182 | 0.8954 | 0.3029 | 0.8954 | 0.9462 |
| No log | 2.3896 | 184 | 0.8919 | 0.3029 | 0.8919 | 0.9444 |
| No log | 2.4156 | 186 | 0.8598 | 0.3121 | 0.8598 | 0.9272 |
| No log | 2.4416 | 188 | 0.8609 | 0.1807 | 0.8609 | 0.9278 |
| No log | 2.4675 | 190 | 0.8174 | 0.1463 | 0.8174 | 0.9041 |
| No log | 2.4935 | 192 | 0.7956 | 0.1972 | 0.7956 | 0.8920 |
| No log | 2.5195 | 194 | 0.7902 | 0.1600 | 0.7902 | 0.8889 |
| No log | 2.5455 | 196 | 0.8053 | 0.1600 | 0.8053 | 0.8974 |
| No log | 2.5714 | 198 | 0.8035 | 0.1034 | 0.8035 | 0.8964 |
| No log | 2.5974 | 200 | 0.8535 | 0.1409 | 0.8535 | 0.9239 |
| No log | 2.6234 | 202 | 0.8471 | 0.1793 | 0.8471 | 0.9204 |
| No log | 2.6494 | 204 | 0.8169 | 0.1409 | 0.8169 | 0.9038 |
| No log | 2.6753 | 206 | 0.7635 | 0.2817 | 0.7635 | 0.8738 |
| No log | 2.7013 | 208 | 0.7305 | 0.1797 | 0.7305 | 0.8547 |
| No log | 2.7273 | 210 | 0.7279 | 0.1962 | 0.7279 | 0.8532 |
| No log | 2.7532 | 212 | 0.7730 | 0.1356 | 0.7730 | 0.8792 |
| No log | 2.7792 | 214 | 0.8558 | 0.1409 | 0.8558 | 0.9251 |
| No log | 2.8052 | 216 | 0.9312 | 0.1209 | 0.9312 | 0.9650 |
| No log | 2.8312 | 218 | 0.8830 | 0.0828 | 0.8830 | 0.9397 |
| No log | 2.8571 | 220 | 0.8928 | 0.0503 | 0.8928 | 0.9449 |
| No log | 2.8831 | 222 | 0.8265 | 0.0735 | 0.8265 | 0.9091 |
| No log | 2.9091 | 224 | 0.7293 | 0.1962 | 0.7293 | 0.8540 |
| No log | 2.9351 | 226 | 0.7209 | 0.2553 | 0.7209 | 0.8490 |
| No log | 2.9610 | 228 | 0.7638 | 0.0735 | 0.7638 | 0.8739 |
| No log | 2.9870 | 230 | 0.8789 | 0.2186 | 0.8789 | 0.9375 |
| No log | 3.0130 | 232 | 0.9329 | 0.1162 | 0.9329 | 0.9659 |
| No log | 3.0390 | 234 | 0.8741 | 0.2164 | 0.8741 | 0.9349 |
| No log | 3.0649 | 236 | 0.7964 | 0.2727 | 0.7964 | 0.8924 |
| No log | 3.0909 | 238 | 0.6367 | 0.4143 | 0.6367 | 0.7980 |
| No log | 3.1169 | 240 | 0.6690 | 0.3415 | 0.6690 | 0.8179 |
| No log | 3.1429 | 242 | 0.6999 | 0.3415 | 0.6999 | 0.8366 |
| No log | 3.1688 | 244 | 0.6693 | 0.3095 | 0.6693 | 0.8181 |
| No log | 3.1948 | 246 | 0.8799 | 0.1209 | 0.8799 | 0.9380 |
| No log | 3.2208 | 248 | 1.1130 | -0.0628 | 1.1130 | 1.0550 |
| No log | 3.2468 | 250 | 1.2092 | 0.0175 | 1.2092 | 1.0996 |
| No log | 3.2727 | 252 | 1.1660 | -0.0628 | 1.1660 | 1.0798 |
| No log | 3.2987 | 254 | 1.0302 | -0.0909 | 1.0302 | 1.0150 |
| No log | 3.3247 | 256 | 0.8537 | 0.0828 | 0.8537 | 0.9239 |
| No log | 3.3506 | 258 | 0.7592 | 0.0957 | 0.7592 | 0.8713 |
| No log | 3.3766 | 260 | 0.7135 | 0.1962 | 0.7135 | 0.8447 |
| No log | 3.4026 | 262 | 0.6949 | 0.1962 | 0.6949 | 0.8336 |
| No log | 3.4286 | 264 | 0.6977 | 0.2759 | 0.6977 | 0.8353 |
| No log | 3.4545 | 266 | 0.7466 | 0.4637 | 0.7466 | 0.8641 |
| No log | 3.4805 | 268 | 0.7479 | 0.5097 | 0.7479 | 0.8648 |
| No log | 3.5065 | 270 | 0.7510 | 0.5097 | 0.7510 | 0.8666 |
| No log | 3.5325 | 272 | 0.7816 | 0.4637 | 0.7816 | 0.8841 |
| No log | 3.5584 | 274 | 0.8043 | 0.4637 | 0.8043 | 0.8968 |
| No log | 3.5844 | 276 | 0.8205 | 0.4637 | 0.8205 | 0.9058 |
| No log | 3.6104 | 278 | 0.8017 | 0.4637 | 0.8017 | 0.8954 |
| No log | 3.6364 | 280 | 0.8034 | 0.3771 | 0.8034 | 0.8963 |
| No log | 3.6623 | 282 | 0.7529 | 0.1558 | 0.7529 | 0.8677 |
| No log | 3.6883 | 284 | 0.7203 | 0.1558 | 0.7203 | 0.8487 |
| No log | 3.7143 | 286 | 0.6841 | 0.1558 | 0.6841 | 0.8271 |
| No log | 3.7403 | 288 | 0.6868 | 0.1558 | 0.6868 | 0.8287 |
| No log | 3.7662 | 290 | 0.7090 | 0.1558 | 0.7090 | 0.8420 |
| No log | 3.7922 | 292 | 0.7506 | 0.3771 | 0.7506 | 0.8664 |
| No log | 3.8182 | 294 | 0.7634 | 0.3255 | 0.7634 | 0.8737 |
| No log | 3.8442 | 296 | 0.7077 | 0.2186 | 0.7077 | 0.8412 |
| No log | 3.8701 | 298 | 0.6231 | 0.1558 | 0.6231 | 0.7894 |
| No log | 3.8961 | 300 | 0.5628 | 0.2184 | 0.5628 | 0.7502 |
| No log | 3.9221 | 302 | 0.5397 | 0.3662 | 0.5397 | 0.7347 |
| No log | 3.9481 | 304 | 0.5268 | 0.3467 | 0.5268 | 0.7258 |
| No log | 3.9740 | 306 | 0.5222 | 0.3318 | 0.5222 | 0.7226 |
| No log | 4.0 | 308 | 0.5456 | 0.3318 | 0.5456 | 0.7387 |
| No log | 4.0260 | 310 | 0.6004 | 0.3771 | 0.6004 | 0.7749 |
| No log | 4.0519 | 312 | 0.6568 | 0.3255 | 0.6568 | 0.8105 |
| No log | 4.0779 | 314 | 0.7310 | 0.3255 | 0.7310 | 0.8550 |
| No log | 4.1039 | 316 | 0.7536 | 0.4177 | 0.7536 | 0.8681 |
| No log | 4.1299 | 318 | 0.7002 | 0.1209 | 0.7002 | 0.8368 |
| No log | 4.1558 | 320 | 0.6412 | 0.0735 | 0.6412 | 0.8007 |
| No log | 4.1818 | 322 | 0.6360 | 0.1356 | 0.6360 | 0.7975 |
| No log | 4.2078 | 324 | 0.6325 | 0.1356 | 0.6325 | 0.7953 |
| No log | 4.2338 | 326 | 0.6492 | 0.0916 | 0.6492 | 0.8058 |
| No log | 4.2597 | 328 | 0.6870 | 0.3333 | 0.6870 | 0.8288 |
| No log | 4.2857 | 330 | 0.7388 | 0.4177 | 0.7388 | 0.8595 |
| No log | 4.3117 | 332 | 0.7313 | 0.4177 | 0.7313 | 0.8552 |
| No log | 4.3377 | 334 | 0.7736 | 0.4177 | 0.7736 | 0.8796 |
| No log | 4.3636 | 336 | 0.7553 | 0.4177 | 0.7553 | 0.8691 |
| No log | 4.3896 | 338 | 0.6783 | 0.4615 | 0.6783 | 0.8236 |
| No log | 4.4156 | 340 | 0.6286 | 0.4270 | 0.6286 | 0.7928 |
| No log | 4.4416 | 342 | 0.6262 | 0.3549 | 0.6262 | 0.7913 |
| No log | 4.4675 | 344 | 0.6281 | 0.2732 | 0.6281 | 0.7925 |
| No log | 4.4935 | 346 | 0.6168 | 0.3549 | 0.6168 | 0.7854 |
| No log | 4.5195 | 348 | 0.6291 | 0.4494 | 0.6291 | 0.7932 |
| No log | 4.5455 | 350 | 0.6123 | 0.4389 | 0.6123 | 0.7825 |
| No log | 4.5714 | 352 | 0.5829 | 0.4389 | 0.5829 | 0.7634 |
| No log | 4.5974 | 354 | 0.5640 | 0.4277 | 0.5640 | 0.7510 |
| No log | 4.6234 | 356 | 0.5943 | 0.3771 | 0.5943 | 0.7709 |
| No log | 4.6494 | 358 | 0.6191 | 0.4637 | 0.6191 | 0.7868 |
| No log | 4.6753 | 360 | 0.6079 | 0.4637 | 0.6079 | 0.7797 |
| No log | 4.7013 | 362 | 0.5695 | 0.3771 | 0.5695 | 0.7547 |
| No log | 4.7273 | 364 | 0.5447 | 0.3771 | 0.5447 | 0.7380 |
| No log | 4.7532 | 366 | 0.5398 | 0.3771 | 0.5398 | 0.7347 |
| No log | 4.7792 | 368 | 0.4981 | 0.3771 | 0.4981 | 0.7058 |
| No log | 4.8052 | 370 | 0.4942 | 0.4277 | 0.4942 | 0.7030 |
| No log | 4.8312 | 372 | 0.5187 | 0.3771 | 0.5187 | 0.7202 |
| No log | 4.8571 | 374 | 0.5557 | 0.3811 | 0.5557 | 0.7454 |
| No log | 4.8831 | 376 | 0.6150 | 0.4597 | 0.6150 | 0.7842 |
| No log | 4.9091 | 378 | 0.6522 | 0.4597 | 0.6522 | 0.8076 |
| No log | 4.9351 | 380 | 0.6157 | 0.4615 | 0.6157 | 0.7846 |
| No log | 4.9610 | 382 | 0.5712 | 0.4385 | 0.5712 | 0.7558 |
| No log | 4.9870 | 384 | 0.5428 | 0.4806 | 0.5428 | 0.7368 |
| No log | 5.0130 | 386 | 0.5473 | 0.4839 | 0.5473 | 0.7398 |
| No log | 5.0390 | 388 | 0.5866 | 0.4806 | 0.5866 | 0.7659 |
| No log | 5.0649 | 390 | 0.6496 | 0.5039 | 0.6496 | 0.8060 |
| No log | 5.0909 | 392 | 0.6978 | 0.4597 | 0.6978 | 0.8353 |
| No log | 5.1169 | 394 | 0.7490 | 0.4187 | 0.7490 | 0.8654 |
| No log | 5.1429 | 396 | 0.7018 | 0.4187 | 0.7018 | 0.8377 |
| No log | 5.1688 | 398 | 0.6237 | 0.5084 | 0.6237 | 0.7897 |
| No log | 5.1948 | 400 | 0.5918 | 0.5084 | 0.5918 | 0.7693 |
| No log | 5.2208 | 402 | 0.6094 | 0.5084 | 0.6094 | 0.7806 |
| No log | 5.2468 | 404 | 0.6370 | 0.5084 | 0.6370 | 0.7981 |
| No log | 5.2727 | 406 | 0.6498 | 0.5084 | 0.6498 | 0.8061 |
| No log | 5.2987 | 408 | 0.6673 | 0.4667 | 0.6673 | 0.8169 |
| No log | 5.3247 | 410 | 0.7001 | 0.4667 | 0.7001 | 0.8367 |
| No log | 5.3506 | 412 | 0.6879 | 0.5039 | 0.6879 | 0.8294 |
| No log | 5.3766 | 414 | 0.7002 | 0.5039 | 0.7002 | 0.8368 |
| No log | 5.4026 | 416 | 0.7152 | 0.5039 | 0.7152 | 0.8457 |
| No log | 5.4286 | 418 | 0.7159 | 0.5039 | 0.7159 | 0.8461 |
| No log | 5.4545 | 420 | 0.6857 | 0.5039 | 0.6857 | 0.8281 |
| No log | 5.4805 | 422 | 0.6856 | 0.4667 | 0.6856 | 0.8280 |
| No log | 5.5065 | 424 | 0.6577 | 0.4688 | 0.6577 | 0.8110 |
| No log | 5.5325 | 426 | 0.6253 | 0.3931 | 0.6253 | 0.7907 |
| No log | 5.5584 | 428 | 0.5983 | 0.3662 | 0.5983 | 0.7735 |
| No log | 5.5844 | 430 | 0.5883 | 0.3614 | 0.5883 | 0.7670 |
| No log | 5.6104 | 432 | 0.5912 | 0.3614 | 0.5912 | 0.7689 |
| No log | 5.6364 | 434 | 0.5999 | 0.3177 | 0.5999 | 0.7746 |
| No log | 5.6623 | 436 | 0.6260 | 0.1978 | 0.6260 | 0.7912 |
| No log | 5.6883 | 438 | 0.6702 | 0.2355 | 0.6702 | 0.8187 |
| No log | 5.7143 | 440 | 0.7137 | 0.4177 | 0.7137 | 0.8448 |
| No log | 5.7403 | 442 | 0.7344 | 0.4177 | 0.7344 | 0.8570 |
| No log | 5.7662 | 444 | 0.7182 | 0.4177 | 0.7182 | 0.8475 |
| No log | 5.7922 | 446 | 0.6693 | 0.3333 | 0.6693 | 0.8181 |
| No log | 5.8182 | 448 | 0.6141 | 0.2533 | 0.6141 | 0.7836 |
| No log | 5.8442 | 450 | 0.5978 | 0.2533 | 0.5978 | 0.7732 |
| No log | 5.8701 | 452 | 0.5977 | 0.2533 | 0.5977 | 0.7731 |
| No log | 5.8961 | 454 | 0.6088 | 0.2533 | 0.6088 | 0.7803 |
| No log | 5.9221 | 456 | 0.6278 | 0.2533 | 0.6278 | 0.7924 |
| No log | 5.9481 | 458 | 0.6775 | 0.3121 | 0.6775 | 0.8231 |
| No log | 5.9740 | 460 | 0.7081 | 0.4177 | 0.7081 | 0.8415 |
| No log | 6.0 | 462 | 0.7179 | 0.4177 | 0.7179 | 0.8473 |
| No log | 6.0260 | 464 | 0.7179 | 0.4177 | 0.7179 | 0.8473 |
| No log | 6.0519 | 466 | 0.6966 | 0.4283 | 0.6966 | 0.8346 |
| No log | 6.0779 | 468 | 0.6975 | 0.3588 | 0.6975 | 0.8352 |
| No log | 6.1039 | 470 | 0.6809 | 0.2669 | 0.6809 | 0.8252 |
| No log | 6.1299 | 472 | 0.6768 | 0.2669 | 0.6768 | 0.8227 |
| No log | 6.1558 | 474 | 0.6848 | 0.2669 | 0.6848 | 0.8275 |
| No log | 6.1818 | 476 | 0.7052 | 0.4385 | 0.7052 | 0.8398 |
| No log | 6.2078 | 478 | 0.7168 | 0.4385 | 0.7168 | 0.8466 |
| No log | 6.2338 | 480 | 0.7180 | 0.4385 | 0.7180 | 0.8474 |
| No log | 6.2597 | 482 | 0.7159 | 0.4385 | 0.7159 | 0.8461 |
| No log | 6.2857 | 484 | 0.7170 | 0.3931 | 0.7170 | 0.8468 |
| No log | 6.3117 | 486 | 0.7250 | 0.3396 | 0.7250 | 0.8515 |
| No log | 6.3377 | 488 | 0.7178 | 0.3396 | 0.7178 | 0.8472 |
| No log | 6.3636 | 490 | 0.6956 | 0.2533 | 0.6956 | 0.8340 |
| No log | 6.3896 | 492 | 0.6898 | 0.2817 | 0.6898 | 0.8305 |
| No log | 6.4156 | 494 | 0.6792 | 0.2279 | 0.6792 | 0.8241 |
| No log | 6.4416 | 496 | 0.6705 | 0.2279 | 0.6705 | 0.8188 |
| No log | 6.4675 | 498 | 0.6631 | 0.2279 | 0.6631 | 0.8143 |
| 0.4467 | 6.4935 | 500 | 0.6686 | 0.3171 | 0.6686 | 0.8177 |
| 0.4467 | 6.5195 | 502 | 0.6920 | 0.3396 | 0.6920 | 0.8318 |
| 0.4467 | 6.5455 | 504 | 0.6973 | 0.4280 | 0.6973 | 0.8350 |
| 0.4467 | 6.5714 | 506 | 0.6903 | 0.4389 | 0.6903 | 0.8309 |
| 0.4467 | 6.5974 | 508 | 0.6847 | 0.4389 | 0.6847 | 0.8274 |
| 0.4467 | 6.6234 | 510 | 0.6680 | 0.3748 | 0.6680 | 0.8173 |
| 0.4467 | 6.6494 | 512 | 0.6424 | 0.3855 | 0.6424 | 0.8015 |
| 0.4467 | 6.6753 | 514 | 0.6253 | 0.3855 | 0.6253 | 0.7907 |
| 0.4467 | 6.7013 | 516 | 0.6163 | 0.3855 | 0.6163 | 0.7851 |
| 0.4467 | 6.7273 | 518 | 0.6123 | 0.3855 | 0.6123 | 0.7825 |
| 0.4467 | 6.7532 | 520 | 0.6177 | 0.3438 | 0.6177 | 0.7860 |
| 0.4467 | 6.7792 | 522 | 0.6264 | 0.3251 | 0.6264 | 0.7915 |
| 0.4467 | 6.8052 | 524 | 0.6291 | 0.3900 | 0.6291 | 0.7932 |
| 0.4467 | 6.8312 | 526 | 0.6396 | 0.2921 | 0.6396 | 0.7997 |
| 0.4467 | 6.8571 | 528 | 0.6518 | 0.2533 | 0.6518 | 0.8073 |
| 0.4467 | 6.8831 | 530 | 0.6627 | 0.2135 | 0.6627 | 0.8140 |
| 0.4467 | 6.9091 | 532 | 0.6871 | 0.3322 | 0.6871 | 0.8289 |
| 0.4467 | 6.9351 | 534 | 0.7290 | 0.3333 | 0.7290 | 0.8538 |
| 0.4467 | 6.9610 | 536 | 0.7837 | 0.4187 | 0.7837 | 0.8853 |
| 0.4467 | 6.9870 | 538 | 0.8015 | 0.4187 | 0.8015 | 0.8953 |
| 0.4467 | 7.0130 | 540 | 0.7886 | 0.4187 | 0.7886 | 0.8880 |
| 0.4467 | 7.0390 | 542 | 0.7458 | 0.4 | 0.7458 | 0.8636 |
| 0.4467 | 7.0649 | 544 | 0.7039 | 0.3717 | 0.7039 | 0.8390 |
| 0.4467 | 7.0909 | 546 | 0.6801 | 0.2977 | 0.6801 | 0.8247 |
| 0.4467 | 7.1169 | 548 | 0.6700 | 0.3846 | 0.6700 | 0.8186 |
| 0.4467 | 7.1429 | 550 | 0.6696 | 0.3846 | 0.6696 | 0.8183 |
| 0.4467 | 7.1688 | 552 | 0.6754 | 0.3846 | 0.6754 | 0.8218 |
| 0.4467 | 7.1948 | 554 | 0.6611 | 0.3846 | 0.6611 | 0.8131 |
| 0.4467 | 7.2208 | 556 | 0.6278 | 0.3771 | 0.6278 | 0.7923 |
| 0.4467 | 7.2468 | 558 | 0.5993 | 0.3251 | 0.5993 | 0.7741 |
| 0.4467 | 7.2727 | 560 | 0.5790 | 0.3708 | 0.5790 | 0.7609 |
| 0.4467 | 7.2987 | 562 | 0.5656 | 0.3708 | 0.5656 | 0.7521 |
| 0.4467 | 7.3247 | 564 | 0.5620 | 0.4394 | 0.5620 | 0.7497 |
| 0.4467 | 7.3506 | 566 | 0.5676 | 0.4394 | 0.5676 | 0.7534 |
| 0.4467 | 7.3766 | 568 | 0.5752 | 0.3900 | 0.5752 | 0.7584 |
| 0.4467 | 7.4026 | 570 | 0.5908 | 0.3771 | 0.5908 | 0.7686 |
| 0.4467 | 7.4286 | 572 | 0.6054 | 0.3771 | 0.6053 | 0.7780 |
| 0.4467 | 7.4545 | 574 | 0.6012 | 0.3771 | 0.6012 | 0.7754 |
| 0.4467 | 7.4805 | 576 | 0.5897 | 0.3771 | 0.5897 | 0.7679 |
| 0.4467 | 7.5065 | 578 | 0.5760 | 0.3396 | 0.5760 | 0.7589 |
| 0.4467 | 7.5325 | 580 | 0.5508 | 0.3467 | 0.5508 | 0.7421 |
| 0.4467 | 7.5584 | 582 | 0.5407 | 0.3467 | 0.5407 | 0.7353 |
| 0.4467 | 7.5844 | 584 | 0.5401 | 0.3467 | 0.5401 | 0.7349 |
| 0.4467 | 7.6104 | 586 | 0.5509 | 0.3900 | 0.5509 | 0.7422 |
| 0.4467 | 7.6364 | 588 | 0.5763 | 0.3396 | 0.5763 | 0.7591 |
| 0.4467 | 7.6623 | 590 | 0.6224 | 0.4597 | 0.6224 | 0.7890 |
| 0.4467 | 7.6883 | 592 | 0.6624 | 0.4597 | 0.6624 | 0.8139 |
| 0.4467 | 7.7143 | 594 | 0.6687 | 0.4597 | 0.6687 | 0.8178 |
| 0.4467 | 7.7403 | 596 | 0.6662 | 0.4597 | 0.6662 | 0.8162 |
| 0.4467 | 7.7662 | 598 | 0.6564 | 0.4597 | 0.6564 | 0.8102 |
| 0.4467 | 7.7922 | 600 | 0.6513 | 0.4597 | 0.6513 | 0.8070 |
| 0.4467 | 7.8182 | 602 | 0.6531 | 0.4597 | 0.6531 | 0.8081 |
| 0.4467 | 7.8442 | 604 | 0.6599 | 0.4597 | 0.6599 | 0.8124 |
| 0.4467 | 7.8701 | 606 | 0.6715 | 0.4597 | 0.6715 | 0.8195 |
| 0.4467 | 7.8961 | 608 | 0.6723 | 0.4597 | 0.6723 | 0.8199 |
| 0.4467 | 7.9221 | 610 | 0.6670 | 0.4286 | 0.6670 | 0.8167 |
| 0.4467 | 7.9481 | 612 | 0.6532 | 0.4286 | 0.6532 | 0.8082 |
| 0.4467 | 7.9740 | 614 | 0.6268 | 0.4394 | 0.6268 | 0.7917 |
| 0.4467 | 8.0 | 616 | 0.6176 | 0.4394 | 0.6176 | 0.7859 |
| 0.4467 | 8.0260 | 618 | 0.6097 | 0.4394 | 0.6097 | 0.7809 |
| 0.4467 | 8.0519 | 620 | 0.6026 | 0.4394 | 0.6026 | 0.7763 |
| 0.4467 | 8.0779 | 622 | 0.5958 | 0.4394 | 0.5958 | 0.7719 |
| 0.4467 | 8.1039 | 624 | 0.5842 | 0.4394 | 0.5842 | 0.7643 |
| 0.4467 | 8.1299 | 626 | 0.5755 | 0.3467 | 0.5755 | 0.7586 |
| 0.4467 | 8.1558 | 628 | 0.5778 | 0.3467 | 0.5778 | 0.7601 |
| 0.4467 | 8.1818 | 630 | 0.5901 | 0.4394 | 0.5901 | 0.7682 |
| 0.4467 | 8.2078 | 632 | 0.6055 | 0.4394 | 0.6055 | 0.7782 |
| 0.4467 | 8.2338 | 634 | 0.6183 | 0.4772 | 0.6183 | 0.7863 |
| 0.4467 | 8.2597 | 636 | 0.6306 | 0.4740 | 0.6306 | 0.7941 |
| 0.4467 | 8.2857 | 638 | 0.6535 | 0.4280 | 0.6535 | 0.8084 |
| 0.4467 | 8.3117 | 640 | 0.6753 | 0.5045 | 0.6753 | 0.8218 |
| 0.4467 | 8.3377 | 642 | 0.6766 | 0.4381 | 0.6766 | 0.8225 |
| 0.4467 | 8.3636 | 644 | 0.6779 | 0.4381 | 0.6779 | 0.8233 |
| 0.4467 | 8.3896 | 646 | 0.6733 | 0.4381 | 0.6733 | 0.8205 |
| 0.4467 | 8.4156 | 648 | 0.6676 | 0.3322 | 0.6676 | 0.8171 |
| 0.4467 | 8.4416 | 650 | 0.6581 | 0.3855 | 0.6581 | 0.8113 |
| 0.4467 | 8.4675 | 652 | 0.6441 | 0.3855 | 0.6441 | 0.8026 |
| 0.4467 | 8.4935 | 654 | 0.6414 | 0.3051 | 0.6414 | 0.8008 |
| 0.4467 | 8.5195 | 656 | 0.6451 | 0.3051 | 0.6451 | 0.8032 |
| 0.4467 | 8.5455 | 658 | 0.6545 | 0.3051 | 0.6545 | 0.8090 |
| 0.4467 | 8.5714 | 660 | 0.6649 | 0.3251 | 0.6649 | 0.8154 |
| 0.4467 | 8.5974 | 662 | 0.6690 | 0.3322 | 0.6690 | 0.8179 |
| 0.4467 | 8.6234 | 664 | 0.6789 | 0.3322 | 0.6789 | 0.8240 |
| 0.4467 | 8.6494 | 666 | 0.6794 | 0.3322 | 0.6794 | 0.8243 |
| 0.4467 | 8.6753 | 668 | 0.6769 | 0.2888 | 0.6769 | 0.8227 |
| 0.4467 | 8.7013 | 670 | 0.6717 | 0.2457 | 0.6717 | 0.8195 |
| 0.4467 | 8.7273 | 672 | 0.6705 | 0.2457 | 0.6705 | 0.8189 |
| 0.4467 | 8.7532 | 674 | 0.6687 | 0.2457 | 0.6687 | 0.8178 |
| 0.4467 | 8.7792 | 676 | 0.6696 | 0.1985 | 0.6696 | 0.8183 |
| 0.4467 | 8.8052 | 678 | 0.6657 | 0.1985 | 0.6657 | 0.8159 |
| 0.4467 | 8.8312 | 680 | 0.6608 | 0.1985 | 0.6608 | 0.8129 |
| 0.4467 | 8.8571 | 682 | 0.6635 | 0.1985 | 0.6635 | 0.8146 |
| 0.4467 | 8.8831 | 684 | 0.6655 | 0.1985 | 0.6655 | 0.8158 |
| 0.4467 | 8.9091 | 686 | 0.6727 | 0.1985 | 0.6727 | 0.8202 |
| 0.4467 | 8.9351 | 688 | 0.6769 | 0.2888 | 0.6769 | 0.8227 |
| 0.4467 | 8.9610 | 690 | 0.6739 | 0.2888 | 0.6739 | 0.8209 |
| 0.4467 | 8.9870 | 692 | 0.6705 | 0.1985 | 0.6705 | 0.8188 |
| 0.4467 | 9.0130 | 694 | 0.6760 | 0.1985 | 0.6760 | 0.8222 |
| 0.4467 | 9.0390 | 696 | 0.6796 | 0.1985 | 0.6796 | 0.8244 |
| 0.4467 | 9.0649 | 698 | 0.6779 | 0.1985 | 0.6779 | 0.8234 |
| 0.4467 | 9.0909 | 700 | 0.6776 | 0.2457 | 0.6776 | 0.8232 |
| 0.4467 | 9.1169 | 702 | 0.6777 | 0.2457 | 0.6777 | 0.8232 |
| 0.4467 | 9.1429 | 704 | 0.6745 | 0.2317 | 0.6745 | 0.8213 |
| 0.4467 | 9.1688 | 706 | 0.6713 | 0.2317 | 0.6713 | 0.8193 |
| 0.4467 | 9.1948 | 708 | 0.6697 | 0.2317 | 0.6697 | 0.8183 |
| 0.4467 | 9.2208 | 710 | 0.6713 | 0.2317 | 0.6713 | 0.8193 |
| 0.4467 | 9.2468 | 712 | 0.6751 | 0.2317 | 0.6751 | 0.8216 |
| 0.4467 | 9.2727 | 714 | 0.6773 | 0.2317 | 0.6773 | 0.8230 |
| 0.4467 | 9.2987 | 716 | 0.6815 | 0.2457 | 0.6815 | 0.8256 |
| 0.4467 | 9.3247 | 718 | 0.6834 | 0.2457 | 0.6834 | 0.8267 |
| 0.4467 | 9.3506 | 720 | 0.6813 | 0.3322 | 0.6813 | 0.8254 |
| 0.4467 | 9.3766 | 722 | 0.6762 | 0.3322 | 0.6762 | 0.8223 |
| 0.4467 | 9.4026 | 724 | 0.6726 | 0.3322 | 0.6726 | 0.8201 |
| 0.4467 | 9.4286 | 726 | 0.6703 | 0.2457 | 0.6703 | 0.8187 |
| 0.4467 | 9.4545 | 728 | 0.6676 | 0.2457 | 0.6676 | 0.8171 |
| 0.4467 | 9.4805 | 730 | 0.6608 | 0.2317 | 0.6608 | 0.8129 |
| 0.4467 | 9.5065 | 732 | 0.6563 | 0.2817 | 0.6563 | 0.8101 |
| 0.4467 | 9.5325 | 734 | 0.6522 | 0.2817 | 0.6522 | 0.8076 |
| 0.4467 | 9.5584 | 736 | 0.6473 | 0.2817 | 0.6473 | 0.8045 |
| 0.4467 | 9.5844 | 738 | 0.6416 | 0.2817 | 0.6416 | 0.8010 |
| 0.4467 | 9.6104 | 740 | 0.6378 | 0.2817 | 0.6378 | 0.7986 |
| 0.4467 | 9.6364 | 742 | 0.6342 | 0.2817 | 0.6342 | 0.7964 |
| 0.4467 | 9.6623 | 744 | 0.6330 | 0.2817 | 0.6330 | 0.7956 |
| 0.4467 | 9.6883 | 746 | 0.6328 | 0.2817 | 0.6328 | 0.7955 |
| 0.4467 | 9.7143 | 748 | 0.6321 | 0.2817 | 0.6321 | 0.7951 |
| 0.4467 | 9.7403 | 750 | 0.6329 | 0.2817 | 0.6329 | 0.7955 |
| 0.4467 | 9.7662 | 752 | 0.6341 | 0.2817 | 0.6341 | 0.7963 |
| 0.4467 | 9.7922 | 754 | 0.6357 | 0.2817 | 0.6357 | 0.7973 |
| 0.4467 | 9.8182 | 756 | 0.6370 | 0.2817 | 0.6370 | 0.7981 |
| 0.4467 | 9.8442 | 758 | 0.6389 | 0.2817 | 0.6389 | 0.7993 |
| 0.4467 | 9.8701 | 760 | 0.6406 | 0.2817 | 0.6406 | 0.8004 |
| 0.4467 | 9.8961 | 762 | 0.6422 | 0.2817 | 0.6422 | 0.8014 |
| 0.4467 | 9.9221 | 764 | 0.6428 | 0.2817 | 0.6428 | 0.8017 |
| 0.4467 | 9.9481 | 766 | 0.6432 | 0.2817 | 0.6432 | 0.8020 |
| 0.4467 | 9.9740 | 768 | 0.6433 | 0.2817 | 0.6433 | 0.8021 |
| 0.4467 | 10.0 | 770 | 0.6435 | 0.2817 | 0.6435 | 0.8022 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Zoyd/bullerwins_Codestral-22B-v0.1-hf-4_25bpw_exl2 | Zoyd | "2024-05-29T22:33:13Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | text-generation | "2024-05-29T21:10:38Z" | ---
inference: false
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
language:
- code
---
**Exllamav2** quant (**exl2** / **4.25 bpw**) made with ExLlamaV2 v0.1.1
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-2_2bpw_exl2)**</center> | <center>6296 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-2_5bpw_exl2)**</center> | <center>7045 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-3_0bpw_exl2)**</center> | <center>8347 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-3_5bpw_exl2)**</center> | <center>9652 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-3_75bpw_exl2)**</center> | <center>10297 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-4_0bpw_exl2)**</center> | <center>10953 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-4_25bpw_exl2)**</center> | <center>11603 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-5_0bpw_exl2)**</center> | <center>13553 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-6_0bpw_exl2)**</center> | <center>16185 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-6_5bpw_exl2)**</center> | <center>17484 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/bullerwins_Codestral-22B-v0.1-hf-8_0bpw_exl2)**</center> | <center>19350 MB</center> | <center>8</center> |
Converted using [this](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf/blob/main/convert_mistral_weights_to_hf-22B.py) script
# Model Card for Codestral-22B-v0.1
Codestrall-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)
## Installation
It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.
```
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```
Will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines:
```
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.
fn fibonacci(n: u32) -> u32 {
match n {
0 => 0,
1 => 1,
_ => fibonacci(n - 1) + fibonacci(n - 2),
}
}
fn main() {
let n = 10;
println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}
This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
```
### Fill-in-the-middle (FIM)
After installing `mistral_inference` and running `pip install --upgrade mistral_common` to make sure to have mistral_common>=1.2 installed:
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest
tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")
prefix = """def add("""
suffix = """ return sum"""
request = FIMRequest(prompt=prefix, suffix=suffix)
tokens = tokenizer.encode_fim(request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
middle = result.split(suffix)[0].strip()
print(middle)
```
Should give something along the following lines:
```
num1, num2):
# Add two numbers
sum = num1 + num2
# return the sum
```
## Limitations
The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## License
Codestral-22B-v0.1 is released under the `MNLP-0.1` license.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, LΓ©lio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, TimothΓ©e Lacroix, ThΓ©ophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
CultriX/Qwen2.5-14B-MegaMerge-pt2 | CultriX | "2024-11-14T13:50:10Z" | 49 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:CultriX/Qwen2.5-14B-MergeStock",
"base_model:merge:CultriX/Qwen2.5-14B-MergeStock",
"base_model:CultriX/Qwen2.5-14B-Wernicke",
"base_model:merge:CultriX/Qwen2.5-14B-Wernicke",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-24T23:45:28Z" | ---
base_model:
- CultriX/Qwen2.5-14B-MegaMerge-pt1
- CultriX/Qwen2.5-14B-Wernicke
- CultriX/Qwen2.5-14B-MergeStock
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
model-index:
- name: Qwen2.5-14B-MegaMerge-pt2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 52.35
name: strict accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-MegaMerge-pt2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 50.64
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-MegaMerge-pt2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 30.06
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-MegaMerge-pt2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.13
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-MegaMerge-pt2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 18.25
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-MegaMerge-pt2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.15
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/Qwen2.5-14B-MegaMerge-pt2
name: Open LLM Leaderboard
metrics:
- accuracy
pipeline_tag: text-generation
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [CultriX/Qwen2.5-14B-MegaMerge-pt1](https://huggingface.co/CultriX/Qwen2.5-14B-MegaMerge-pt1) as a base.
### Models Merged
The following models were included in the merge:
* [CultriX/Qwen2.5-14B-Wernicke](https://huggingface.co/CultriX/Qwen2.5-14B-Wernicke)
* [CultriX/Qwen2.5-14B-MergeStock](https://huggingface.co/CultriX/Qwen2.5-14B-MergeStock)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# final_dare_ties_merge.yaml
models:
- model: CultriX/Qwen2.5-14B-MergeStock
parameters:
density: 0.5 # Retain 50% of the most significant parameters
weight: 0.6 # Emphasize MergeStock's contributions
- model: CultriX/Qwen2.5-14B-Wernicke
parameters:
density: 0.5 # Retain 50% of the most significant parameters
weight: 0.4 # Incorporate Wernicke's contributions
merge_method: dare_ties
base_model: CultriX/Qwen2.5-14B-MegaMerge-pt1
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-14B-Instruct
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CultriX__Qwen2.5-14B-MegaMerge-pt2)
| Metric | Value |
|------------------- |------:|
| Avg. | 36.69 |
| IFEval (0-Shot) | 56.83 |
| BBH (3-Shot) | 50.91 |
| MATH Lvl 5 (4-Shot)| 27.34 |
| GPQA (0-shot) | 17.23 |
| MuSR (0-shot) | 18.74 |
| MMLU-PRO (5-shot) | 49.12 | |
EchoStreet/mpt-7b | EchoStreet | "2023-07-31T14:21:07Z" | 10 | 2 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-27T21:16:27Z" | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
duplicated_from: mosaicml/mpt-7b
---
# MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicMLβs NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
## Model Date
May 5, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), model flop utilization (MFU) increased by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please cosult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source,
Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-05-05},
urldate = {2023-05-05}
}
```
|
abdesedd/llama_gelora_4_0.0001 | abdesedd | "2025-03-29T21:32:05Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | "2025-03-29T21:23:12Z" | ---
base_model: meta-llama/Llama-3.1-8B
library_name: peft
license: llama3.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama_gelora_4_0.0001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_gelora_4_0.0001
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.20.3 |
huzaifa1117/speecht5_tts_voxpopuli_ur | huzaifa1117 | "2024-05-31T19:51:49Z" | 78 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"nl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-05-30T19:31:37Z" | ---
language:
- nl
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS urdu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS urdu
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the mine dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5323 | 1.6694 | 1000 | 0.4864 |
| 0.5074 | 3.3389 | 2000 | 0.4734 |
| 0.5006 | 5.0083 | 3000 | 0.4685 |
| 0.5072 | 6.6778 | 4000 | 0.4670 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
kyungmin011029/code4 | kyungmin011029 | "2023-08-21T06:04:28Z" | 63 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-21T06:04:16Z" | ---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_keras_callback
model-index:
- name: code4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# code4
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
ahishamm/vit-base-isic-sharpened-patch-16 | ahishamm | "2023-06-29T10:39:18Z" | 222 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-06-29T10:34:24Z" | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-isic-sharpened-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-isic-sharpened-patch-16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/isic_sharpened_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6163
- Accuracy: 0.7639
- Recall: 0.7639
- F1: 0.7639
- Precision: 0.7639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Rooney88/bio_llama-2_13b | Rooney88 | "2024-02-14T19:39:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-14T17:09:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mimo77/ppo-LunarLander-v2 | Mimo77 | "2023-11-27T13:29:49Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-27T13:29:24Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.47 +/- 24.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
joortif/P5_blindness_resnet34 | joortif | "2025-03-01T13:05:36Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2025-03-01T13:05:22Z" | ---
tags:
- fastai
---
# Amazing!
π₯³ Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using π€ Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner π€! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
iTroned/bert_90_masked_hs_test | iTroned | "2025-04-01T14:16:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-01T13:58:20Z" | Temporary Redirect. Redirecting to /api/resolve-cache/models/iTroned/bert_90_masked_hs_test/c0f45e7eeb44a6fb62d4f59445711c8f85b1e9b1/README.md?%2FiTroned%2Fbert_90_masked_hs_test%2Fresolve%2Fmain%2FREADME.md=&etag=%222d3a2386e06893c48015e4bbfc5ac2fffad5078d%22 |
sherlzanora/emotion_multilabel_indobert | sherlzanora | "2025-04-09T15:50:58Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-09T15:50:36Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
lesso02/c6d713ac-d60c-4d32-b327-6067ff212db6 | lesso02 | "2025-03-23T19:50:42Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-03-23T17:56:50Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codegemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c6d713ac-d60c-4d32-b327-6067ff212db6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/codegemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c16da3984b34bc14_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c16da3984b34bc14_train_data.json
type:
field_instruction: topic
field_output: argument
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso02/c6d713ac-d60c-4d32-b327-6067ff212db6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000202
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/c16da3984b34bc14_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 20
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 884e226b-1988-4d7f-ae9d-7cde80732a8a
wandb_project: 02a
wandb_run: your_name
wandb_runid: 884e226b-1988-4d7f-ae9d-7cde80732a8a
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c6d713ac-d60c-4d32-b327-6067ff212db6
This model is a fine-tuned version of [unsloth/codegemma-7b-it](https://huggingface.co/unsloth/codegemma-7b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0011 | 1 | 4.4484 |
| 2.4336 | 0.5671 | 500 | 2.4206 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
maviced/practica3 | maviced | "2024-04-22T20:26:07Z" | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | "2024-04-14T10:40:44Z" | ---
tags:
- fastai
---
# Amazing!
π₯³ Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using π€ Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner π€! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
tuanna08go/148c62d9-e164-609d-1e6b-0a5f2bf36898 | tuanna08go | "2025-01-12T05:10:48Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:llama3",
"region:us"
] | null | "2025-01-12T03:02:05Z" | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 148c62d9-e164-609d-1e6b-0a5f2bf36898
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Pro-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4f040e85e93f95cf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4f040e85e93f95cf_train_data.json
type:
field_input: timestamp
field_instruction: url
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/148c62d9-e164-609d-1e6b-0a5f2bf36898
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/4f040e85e93f95cf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4406cffd-e6e5-4159-b744-ee300b8e550d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4406cffd-e6e5-4159-b744-ee300b8e550d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 148c62d9-e164-609d-1e6b-0a5f2bf36898
This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 2.2851 |
| 2.3215 | 0.0007 | 10 | 2.2082 |
| 2.1787 | 0.0013 | 20 | 2.1010 |
| 2.2362 | 0.0020 | 30 | 2.0700 |
| 2.0298 | 0.0026 | 40 | 2.0608 |
| 1.9832 | 0.0033 | 50 | 2.0586 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
vanillaOVO/WizardCoder-Python-13B-V1.0 | vanillaOVO | "2024-06-20T10:40:28Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"arxiv:2303.08774",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-20T03:18:02Z" | ---
license: llama2
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: WizardCoder-Python-13B-V1.0
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.64
verified: false
---
## Note
This is a replica of the official repository, intended solely for research purposes to replicate results. If there are any copyright issues, please contact me.
<p align="center">
π€ <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> β’π± <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> β’ π¦ <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> β’ π <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> β’ π <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> β’ π <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
π Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
## News
- π₯π₯π₯[2023/08/26] We released **WizardCoder-Python-34B-V1.0** , which achieves the **73.2 pass@1** and surpasses **GPT4 (2023/03/15)**, **ChatGPT-3.5**, and **Claude2** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2023/06/16] We released **WizardCoder-15B-V1.0** , which achieves the **57.3 pass@1** and surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
βNote: There are two HumanEval results of GPT4 and ChatGPT-3.5. The 67.0 and 48.1 are reported by the official GPT4 Report (2023/03/15) of [OpenAI](https://arxiv.org/abs/2303.08774). The 82.0 and 72.5 are tested by ourselves with the latest API (2023/08/26).
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
- Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM, and achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
<font size=4>
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | π€ <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | π <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo ](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
</font>
- [08/09/2023] We released **WizardLM-70B-V1.0** model. Here is [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-70B-V1.0).
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>**WizardLM-70B-V1.0**</sup> | <sup>π€ <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>π**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.2</sup> | <sup>π€ <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 </sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> π€ <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 </sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>π€ <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 </sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>π€ <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>π€ <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> π <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 </sup>|<sup> Non-commercial</sup>|
</font>
## Comparing WizardCoder-Python-34B-V1.0 with Other LLMs.
π₯ The following figure shows that our **WizardCoder-Python-34B-V1.0 attains the second position in this benchmark**, surpassing GPT4 (2023/03/15, 73.2 vs. 67.0), ChatGPT-3.5 (73.2 vs. 72.5) and Claude2 (73.2 vs. 71.2).
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardCoder/imgs/compare_sota.png" alt="WizardCoder" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
## Prompt Format
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```
## Inference Demo Script
We provide the inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
Note: This script supports `WizardLM/WizardCoder-Python-34B/13B/7B-V1.0`. If you want to inference with `WizardLM/WizardCoder-15B/3B/1B-V1.0`, please change the `stop_tokens = ['</s>']` to `stop_tokens = ['<|endoftext|>']` in the script.
## Citation
Please cite the repo if you use the data, method or code in this repo.
```
@article{luo2023wizardcoder,
title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
author={Luo, Ziyang and Xu, Can and Zhao, Pu and Sun, Qingfeng and Geng, Xiubo and Hu, Wenxiang and Tao, Chongyang and Ma, Jing and Lin, Qingwei and Jiang, Daxin},
journal={arXiv preprint arXiv:2306.08568},
year={2023}
}
``` |
xieqilenb/fashion-mnist-tiny | xieqilenb | "2025-03-19T13:13:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-03-19T13:13:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mirapape/icon-lora-v1 | Mirapape | "2025-01-29T17:05:59Z" | 41 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-29T16:52:16Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ICON
---
# Icon Lora V1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ICON` to trigger the image generation.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Mirapape/icon-lora-v1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
tmnam20/xlm-roberta-large-mrpc-1 | tmnam20 | "2024-01-18T06:48:34Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-18T06:45:23Z" | ---
language:
- en
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- tmnam20/VieGLUE
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-large-mrpc-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tmnam20/VieGLUE/MRPC
type: tmnam20/VieGLUE
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8676470588235294
- name: F1
type: f1
value: 0.9028776978417268
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-mrpc-1
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the tmnam20/VieGLUE/MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3261
- Accuracy: 0.8676
- F1: 0.9029
- Combined Score: 0.8853
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf | RichardErkhov | "2025-03-13T15:24:48Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-13T15:17:52Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
iq-code-evmind-0.5b-instruct-v0.2411.7 - GGUF
- Model creator: https://huggingface.co/braindao/
- Original model: https://huggingface.co/braindao/iq-code-evmind-0.5b-instruct-v0.2411.7/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q2_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q2_K.gguf) | Q2_K | 0.32GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.IQ3_XS.gguf) | IQ3_XS | 0.32GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.IQ3_S.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.IQ3_S.gguf) | IQ3_S | 0.32GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q3_K_S.gguf) | Q3_K_S | 0.32GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.IQ3_M.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.IQ3_M.gguf) | IQ3_M | 0.32GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q3_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q3_K.gguf) | Q3_K | 0.33GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q3_K_M.gguf) | Q3_K_M | 0.33GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q3_K_L.gguf) | Q3_K_L | 0.34GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.IQ4_XS.gguf) | IQ4_XS | 0.33GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_0.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_0.gguf) | Q4_0 | 0.33GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.IQ4_NL.gguf) | IQ4_NL | 0.33GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_K_S.gguf) | Q4_K_S | 0.36GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_K.gguf) | Q4_K | 0.37GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_K_M.gguf) | Q4_K_M | 0.37GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_1.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q4_1.gguf) | Q4_1 | 0.35GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_0.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_0.gguf) | Q5_0 | 0.37GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_K_S.gguf) | Q5_K_S | 0.38GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_K.gguf) | Q5_K | 0.39GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_K_M.gguf) | Q5_K_M | 0.39GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_1.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q5_1.gguf) | Q5_1 | 0.39GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q6_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q6_K.gguf) | Q6_K | 0.47GB |
| [iq-code-evmind-0.5b-instruct-v0.2411.7.Q8_0.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-0.5b-instruct-v0.2411.7-gguf/blob/main/iq-code-evmind-0.5b-instruct-v0.2411.7.Q8_0.gguf) | Q8_0 | 0.49GB |
Original model description:
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Subsets and Splits