Column schema of the dump: modelId (string, length 5–139), author (string, length 2–42), last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-13 12:28:20), downloads (int64, 0 to 223M), likes (int64, 0 to 11.7k), library_name (string, 518 classes), tags (list, length 1–4.05k), pipeline_tag (string, 55 classes), createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-13 12:26:25), card (string, length 11–1.01M).

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Olimjan/uzbek_stt_v2 | Olimjan | 2025-05-26T10:02:31Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-21T14:11:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
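The card leaves this blank; as a hedged placeholder, the repo name suggests a speech-to-text model, so a transformers ASR pipeline sketch might look like the following (the pipeline task and usage are assumptions, not confirmed by the card):

```python
from transformers import pipeline

# Hypothetical: assumes this checkpoint is an automatic-speech-recognition model
# that the transformers pipeline can load directly.
asr = pipeline("automatic-speech-recognition", model="Olimjan/uzbek_stt_v2")
print(asr("audio.wav")["text"])  # placeholder audio file
```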
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaterialsInformaticsLaboratory/QA-SciBERT | MaterialsInformaticsLaboratory | 2025-05-26T10:01:45Z | 0 | 0 | null | [
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:squad_v2",
"region:us"
]
| null | 2025-05-26T10:00:43Z | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: scibert-lr3-seed12-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert-lr3-seed12-v2
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
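A hedged usage sketch (not from the original card; extractive-QA usage is assumed from the SQuAD v2 fine-tuning):

```python
from transformers import pipeline

# Minimal sketch: assumes standard extractive QA with this SQuAD v2 fine-tune.
qa = pipeline("question-answering", model="MaterialsInformaticsLaboratory/QA-SciBERT")
result = qa(
    question="What is SciBERT pretrained on?",
    context="SciBERT is a BERT model pretrained on a large corpus of scientific text.",
)
print(result["answer"], result["score"])
```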
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Nerva1228/youbi01 | Nerva1228 | 2025-05-26T10:01:16Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T10:01:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: youbi
---
# Youbi01
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `youbi` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "youbi",
"lora_weights": "https://huggingface.co/Nerva1228/youbi01/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/youbi01', weight_name='lora.safetensors')
image = pipeline('youbi').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 5e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/youbi01/discussions) to add images that show off what youβve made with this LoRA.
|
MaterialsInformaticsLaboratory/QA-MatSciBERT | MaterialsInformaticsLaboratory | 2025-05-26T09:59:47Z | 0 | 0 | null | [
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:squad_v2",
"region:us"
]
| null | 2025-05-26T09:58:34Z | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: matscibert-lr3-seed12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# matscibert-lr3-seed12
This model is a fine-tuned version of [m3rg-iitd/matscibert](https://huggingface.co/m3rg-iitd/matscibert) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
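As with the QA-SciBERT card above, a hedged extractive-QA sketch applies here as well (same assumed usage, different checkpoint):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="MaterialsInformaticsLaboratory/QA-MatSciBERT")
```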
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu116
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Mekuu/LLAMA3.1-8b-Counsel-v1.5 | Mekuu | 2025-05-26T09:59:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T09:54:27Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Mekuu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Humberto21/Maria | Humberto21 | 2025-05-26T09:57:47Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T09:57:47Z | ---
license: apache-2.0
---
|
LioAlezard/ST4R20-00 | LioAlezard | 2025-05-26T09:55:09Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-05-26T08:31:05Z | ---
license: other
license_name: other
license_link: LICENSE
---
|
LutaoJiang/DiMeR | LutaoJiang | 2025-05-26T09:53:59Z | 0 | 6 | null | [
"Mesh",
"Image-to-3D",
"Text-to-3D",
"3D",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2504.17670",
"license:apache-2.0",
"region:us"
]
| image-to-3d | 2025-04-28T13:00:15Z | ---
license: apache-2.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
tags:
- Mesh
- Image-to-3D
- Text-to-3D
- 3D
---
# DiMeR: Disentangled Mesh Reconstruction Model
Paper: https://arxiv.org/pdf/2504.17670
Project Page: https://lutao2021.github.io/DiMeR_page/
Github: https://github.com/lutao2021/DiMeR
Space: https://huggingface.co/spaces/LTT/DiMeR

# Abstract
With the advent of large-scale 3D datasets, feed-forward 3D generative models, such as the Large Reconstruction Model (LRM), have gained significant attention and achieved remarkable success. However, we observe that RGB images often lead to conflicting training objectives and lack the necessary clarity for geometry reconstruction.
In this paper, we revisit the inductive biases associated with mesh reconstruction and introduce DiMeR, a novel disentangled dual-stream feed-forward model for sparse-view mesh reconstruction. The key idea is to disentangle both the input and framework into geometry and texture parts, thereby reducing the training difficulty for each part according to the Principle of Occam's Razor. Given that normal maps are strictly consistent with geometry and accurately capture surface variations, we utilize normal maps as exclusive input for the geometry branch to reduce the complexity between the network's input and output. Moreover, we improve the mesh extraction algorithm to introduce 3D ground truth supervision.
As for the texture branch, we use RGB images as input to obtain the textured mesh. Overall, DiMeR demonstrates robust capabilities across various tasks, including sparse-view reconstruction, single-image-to-3D, and text-to-3D. Numerous experiments show that DiMeR significantly outperforms previous methods, achieving over 30% improvement in Chamfer Distance on the GSO and OmniObject3D datasets. |
SSSHOGONE22/code-search-net-tokenizer | SSSHOGONE22 | 2025-05-26T09:53:15Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T08:49:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
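The card leaves this blank; as a hedged sketch, the repo name suggests a tokenizer trained on CodeSearchNet, so only tokenizer loading is shown (this is an assumption, not confirmed by the card):

```python
from transformers import AutoTokenizer

# Hypothetical: assumes this repo contains a standalone tokenizer.
tokenizer = AutoTokenizer.from_pretrained("SSSHOGONE22/code-search-net-tokenizer")
print(tokenizer.tokenize("def add(a, b): return a + b"))
```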
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dhchoi/manchu-llama32-11b-vision-mlx | dhchoi | 2025-05-26T09:52:45Z | 0 | 0 | null | [
"safetensors",
"mllama",
"region:us"
]
| null | 2025-05-26T09:47:29Z | # Llama 3.2 11B Vision MLX
This is an MLX-optimized version of the Llama 3.2 11B Vision model, converted for use with Apple Silicon devices.
## Model Description
- **Model Type**: Vision-Language Model
- **Architecture**: MllamaForConditionalGeneration
- **Parameters**: 11B
- **Framework**: MLX (Apple Silicon optimized)
- **Base Model**: Llama 3.2 11B Vision
## Features
- Optimized for Apple Silicon (M1/M2/M3/M4) devices
- Vision and text understanding capabilities
- Efficient inference with MLX framework
- Support for image-text conversations
## Usage
```python
import mlx.core as mx
from mlx_vlm import load, generate
# Load the model
model, processor = load("path/to/model")
# Generate text from image and prompt
image_path = "your_image.jpg"
prompt = "Describe this image"
response = generate(model, processor, image_path, prompt)
print(response)
```
## Requirements
- macOS with Apple Silicon
- MLX framework
- MLX-VLM package
## Installation
```bash
pip install mlx mlx-vlm
```
## Model Details
- **Vision Config**:
- Image size: 560x560
- Patch size: 14
- Hidden size: 1280
- Layers: 32
- **Text Config**:
- Vocab size: 128,256
- Hidden size: 4096
- Layers: 40
- Attention heads: 32
## License
Please refer to the original Llama 3.2 license terms.
## Citation
If you use this model, please cite the original Llama 3.2 paper and acknowledge the MLX conversion.
|
DngBack/TinyStories_Qwen3_4b_llm_head | DngBack | 2025-05-26T09:52:31Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T09:52:07Z | ---
license: apache-2.0
---
|
AstroMLab/AstroSage-70B | AstroMLab | 2025-05-26T09:52:21Z | 15 | 3 | null | [
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"arxiv:2505.17592",
"arxiv:2411.09012",
"arxiv:2407.04420",
"base_model:meta-llama/Llama-3.1-70B",
"base_model:finetune:meta-llama/Llama-3.1-70B",
"license:llama3.1",
"region:us"
]
| text-generation | 2025-05-18T23:32:31Z | ---
license: llama3.1
datasets:
- teknium/OpenHermes-2.5
- nvidia/Llama-Nemotron-Post-Training-Dataset
language:
- en
base_model:
- meta-llama/Llama-3.1-70B
pipeline_tag: text-generation
---
---
**Model Name:** AstroSage-70B
**Version:** 1.0
**Release Date:** 2025-05-20
**Developed by:** AstroMLab (Tijmen de Haan, Yuan-Sen Ting, Tirthankar Ghosal, Tuan Dung Nguyen, Alberto Accomazzi, Emily Herron, Vanessa Lama, Azton Wells, Nesar Ramachandra, Rui Pan)
**Corresponding Contact:** Tijmen de Haan ([email protected])
**Funded by:**
* Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility at Oak Ridge National Laboratory (U.S. Department of Energy).
* Microsoftβs Accelerating Foundation Models Research (AFMR) program.
* World Premier International Research Center Initiative (WPI), MEXT, Japan.
* National Science Foundation (NSF).
* UChicago Argonne LLC, Operator of Argonne National Laboratory (U.S. Department of Energy).
**License:** Llama 3.1 Community License
**Reference Paper:** Tijmen de Haan et al. (2025). "AstroMLab 4: Benchmark-Topping Performance in Astronomy Q&A with a 70B-Parameter Domain-Specialized Reasoning Model" https://arxiv.org/abs/2505.17592
---
### Model Details
**Model Type:** Autoregressive transformer-based LLM, specialized in astronomy, astrophysics, space science, astroparticle physics, cosmology, and astronomical instrumentation.
**Base Model:** Meta-Llama-3.1-70B
**Model Architecture:** AstroSage-70B is a fine-tuned derivative of the Meta-Llama-3.1-70B architecture, making no architectural changes. The Llama-3.1-70B-Instruct tokenizer is also used without modification.
**Context Length:** Fine-tuned on 8192-token sequences. Base model was trained to 128k context length.
---
### Model Description
**Overview:**
AstroSage-70B is a large-scale, domain-specialized language model tailored for research and education in astronomy, astrophysics, space science, cosmology, and astronomical instrumentation.
It builds on the Llama-3.1-70B foundation model, enhanced through extensive continued pre-training (CPT) on a vast corpus of astronomical literature, further refined with supervised fine-tuning (SFT) on instruction-following datasets, and finally improved via parameter averaging (model merging) with other popular fine-tunes.
AstroSage-70B aims to achieve state-of-the-art performance on astronomy-specific tasks, providing researchers, students, and enthusiasts with an advanced AI assistant.
This 70B-parameter model represents a significant scaling up from AstroSage-8B. The primary enhancements over AstroSage-8B are:
1. Stronger base model, higher parameter count for increased capacity
2. Improved datasets
3. Improved learning hyperparameters
4. Reasoning capability (can be enabled or disabled at inference time)
**Training Lineage**
1. **Base Model:** Meta-Llama-3.1-70B.
2. **Continued Pre-Training (CPT):** The base model underwent **2.5 epochs** of CPT (168k GPU-hours) on a specialized astronomy corpus (details below, largely inherited from AstroSage-8B) to produce AstroSage-70B-CPT. This stage imbues domain-specific knowledge and language nuances.
3. **Supervised Fine-Tuning (SFT):** AstroSage-70B-CPT was then fine-tuned for **0.6 epochs** (13k GPU-hours) using astronomy-relevant and general-purpose instruction-following datasets, resulting in AstroSage-70B-SFT.
4. **Final Mixture:** The released AstroSage-70B model is created via parameter averaging / model merging:
* DARE-TIES with `rescale: true` and `lambda: 1.2`
* AstroSage-70B-CPT designated as the "base model"
* **70% AstroSage-70B-SFT** (density 0.7)
* **15% Llama-3.1-Nemotron-70B-Instruct** (density 0.5)
* **7.5% Llama-3.3-70B-Instruct** (density 0.5)
* **7.5% Llama-3.1-70B-Instruct** (density 0.5)
**Intended Use:**
Like AstroSage-8B, this model can be used for a variety of LLM applications, including
* Providing factual information and explanations in astronomy, astrophysics, cosmology, and instrumentation.
* Assisting with literature reviews and summarizing scientific papers.
* Answering domain-specific questions with high accuracy.
* Brainstorming research ideas and formulating hypotheses.
* Assisting with programming tasks related to astronomical data analysis.
* Serving as an educational tool for learning astronomical concepts.
* Potentially forming the core of future agentic research assistants capable of more autonomous scientific tasks.
We hope that the enhanced intelligence and reasoning ability of AstroSage-70B, compared to AstroSage-8B, will enable additional use cases.
---
### Training Data
AstroSage-70B's training data is split into pre-training (Continued Pre-Training, or CPT for short) and post-training (Supervised Fine-Tuning, or SFT).
**Continued Pre-Training (CPT) Data:**
The CPT data for AstroSage-70B starts with the AstroSage-8B training dataset (see https://arxiv.org/abs/2411.09012 for more detail), and adds:
* **Data Processing & Cleaning:** We apply `ftfy` post-processing.
* **Replay Data:** We add a random selection of FineWeb samples to each training epoch. The added FineWeb samples are different for each epoch of training.
**Supervised Fine-Tuning (SFT) Data for `AstroSage-70B-SFT`:**
The SFT dataset is a diverse mix of astronomy-specific and general-purpose instruction-following data, totaling approximately 8.7 GB and over 7.5 million entries. The components are:
* **NVIDIA Reasoning, Science:** 2048.0 MB (252,847 entries)
* **AstroSage Q&A:** 1983.7 MB (4,683,569 entries) - Astronomy-specific question-answer pairs, see https://arxiv.org/abs/2411.09012 / https://doi.org/10.1038/s41598-025-97131-y
* **Teknium OpenHermes 2.5:** 1621.0 MB (1,001,551 entries)
* **cosmosage Q&A:** 616.0 MB (1,354,733 entries) - Cosmology-specific question-answer pairs, see https://arxiv.org/abs/2407.04420 / https://doi.org/10.1016/j.ascom.2025.100934
* **NVIDIA Reasoning, Code:** 600.0 MB (12,170 entries)
* **NVIDIA Reasoning, Math:** 600.0 MB (33,236 entries)
* **NVIDIA Reasoning, Chat:** 243.1 MB (36,395 entries)
* **Miscellaneous Other Astronomy & Instruction Data:** 6 additional datasets totaling 810.2 MB (125,923 entries)
---
### Evaluation
Quantitative evaluation using the AstroMLab-1 benchmark gives state-of-the-art performance, with 86.2% of questions answered correctly. This score is higher than that of all other models at the time of writing (May 2025).

---
### Bias, Risks, and Limitations
* **Knowledge Cutoff:** The model's knowledge cutoff is a mixture of the knowledge cutoff of the foundation model Llama-3.1-70B, which is December 2023, and the cutoff for the astro-ph papers used in the training dataset, which is January 2024. The model will not be aware of developments in astronomy after that cutoff.
* **Hallucinations & Factual Accuracy:** All LLMs, including AstroSage-70B, can generate outputs that sound plausible but are incorrect or nonsensical. All scientific or factual claims must be verified by users.
* **Inherited Biases:** The model may reflect biases present in its extensive training data (e.g., from scientific literature, general web data).
* **Errors in Training Data:** After training completed, the AstroSage Q&A SFT dataset was found to have a non-trivial proportion of references to "the article" or "the text", which are artifacts of how the synthetic training data was generated. This insufficient cleaning resulted in AstroSage-70B inadvertently being trained to occasionally refer to information that is not available in the current conversation. We find that this failure mode is, however, fairly rare.
* **Depth of Specialization:** AstroSage-70B is highly specialized, having been trained on all astro-ph papers from April 2007 to January 2024. However, LLM training does not result in complete memorization of all facts. We find it excels in topics that are discussed in approximately five or more papers.
---
### Chat Template
AstroSage-70B follows the Llama-3.1 chat template. For example:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are an expert in astronomy, astrophysics, space science, cosmology, and astronomical instrumentation. Your role is to provide helpful, factual answers to the user's query.<|eot_id|><|start_header_id|>user<|end_header_id|>
Explain the ISW effect.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
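For reference, a minimal transformers sketch that applies this chat template (not part of the original card; loading settings are illustrative, and a 70B model generally requires multiple GPUs or quantization):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AstroMLab/AstroSage-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the 70B checkpoint across available GPUs (needs accelerate).
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are an expert in astronomy, astrophysics, space science, "
                                  "cosmology, and astronomical instrumentation."},
    {"role": "user", "content": "Explain the ISW effect."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```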
---
### Enabling Reasoning (a.k.a. Thinking, Chain-of-Thought)
Like popular models such as o1, QwQ and DeepSeek-R1, AstroSage-70B is capable of reasoning through a problem before giving an answer. To enable this:
1. Set the **system prompt** to: `detailed thinking on`
2. Prefill the **assistant response** with `<think>`
To enable reasoning for the example above, you would give
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
detailed thinking on<|eot_id|><|start_header_id|>user<|end_header_id|>
Explain the ISW effect.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
<think>
``` |
otaku2015zero/phi4-mini-instruct-3b84-lora | otaku2015zero | 2025-05-26T09:50:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T15:17:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
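The card leaves this blank; as a hedged sketch, the repo name suggests a LoRA adapter for a Phi-4-mini-instruct base, so PEFT-style loading might look like the following (the base model id is a guess inferred from the repo name; verify against the adapter's `adapter_config.json`):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-4-mini-instruct"  # assumption, not confirmed by the card
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "otaku2015zero/phi4-mini-instruct-3b84-lora")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```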
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TanAlexanderlz/ALL_RGBCROP_ori16F-8B16F-GACWDlr | TanAlexanderlz | 2025-05-26T09:47:41Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2025-05-26T08:40:03Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ALL_RGBCROP_ori16F-8B16F-GACWDlr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ALL_RGBCROP_ori16F-8B16F-GACWDlr
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4175
- Accuracy: 0.8563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 960
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6704 | 0.05 | 48 | 0.6516 | 0.6098 |
| 0.5147 | 1.05 | 96 | 0.5519 | 0.6890 |
| 0.318 | 2.05 | 144 | 0.4741 | 0.7561 |
| 0.1843 | 3.05 | 192 | 0.4970 | 0.7988 |
| 0.0684 | 4.05 | 240 | 0.6018 | 0.7866 |
| 0.0323 | 5.05 | 288 | 0.6950 | 0.8293 |
| 0.0124 | 6.05 | 336 | 0.8211 | 0.7927 |
| 0.0023 | 7.05 | 384 | 0.8972 | 0.7927 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
outlookAi/UTIXxmftqQ | outlookAi | 2025-05-26T09:47:24Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T09:31:11Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mlegs
---
# Utixxmftqq
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mlegs` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "mlegs",
"lora_weights": "https://huggingface.co/outlookAi/UTIXxmftqQ/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/UTIXxmftqQ', weight_name='lora.safetensors')
image = pipeline('mlegs').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/outlookAi/UTIXxmftqQ/discussions) to add images that show off what youβve made with this LoRA.
|
DorianVH/ppo-LunarLander-v2 | DorianVH | 2025-05-26T09:43:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-26T09:42:46Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.78 +/- 17.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed, not confirmed by the card.
checkpoint = load_from_hub(repo_id="DorianVH/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jonliebrx/joninstaai | jonliebrx | 2025-05-26T09:41:27Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T09:29:54Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Joninstaai
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/jonliebrx/joninstaai/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jonliebrx/joninstaai', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/jonliebrx/joninstaai/discussions) to add images that show off what youβve made with this LoRA.
|
tartuNLP/Llammas-base-AEG | tartuNLP | 2025-05-26T09:40:45Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:tartuNLP/Llammas-base",
"base_model:finetune:tartuNLP/Llammas-base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-10-02T13:49:41Z | ---
library_name: transformers
base_model:
- tartuNLP/Llammas-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
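The card leaves this blank; a hedged sketch, assuming standard transformers text-generation usage of this Llammas-base fine-tune (prompt and generation settings are illustrative only):

```python
from transformers import pipeline

# Minimal sketch; assumes the checkpoint loads as a causal-LM pipeline.
generator = pipeline("text-generation", model="tartuNLP/Llammas-base-AEG", device_map="auto")
print(generator("Tere!", max_new_tokens=32)[0]["generated_text"])
```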
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
linoyts/Wan-VACE-14B-diffusers | linoyts | 2025-05-26T09:40:11Z | 25 | 1 | diffusers | [
"diffusers",
"safetensors",
"video-to-video",
"diffusers:WanVACEPipeline",
"region:us"
]
| null | 2025-05-21T12:01:22Z | ---
pipeline_tag: video-to-video
library_name: diffusers
--- |
howaardharris/hmh | howaardharris | 2025-05-26T09:39:27Z | 0 | 0 | null | [
"license:bsd-3-clause",
"region:us"
]
| null | 2025-05-26T09:39:27Z | ---
license: bsd-3-clause
---
|
Voidstep/gravel_qp68z | Voidstep | 2025-05-26T09:37:44Z | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-05-26T09:34:48Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
One-RL-to-See-Them-All/Orsta-32B-0321 | One-RL-to-See-Them-All | 2025-05-26T09:37:13Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"VLM",
"multimodal",
"conversational",
"en",
"dataset:One-RL-to-See-Them-All/Orsta-Data-47k",
"arxiv:2505.18129",
"base_model:Qwen/Qwen2.5-VL-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-32B-Instruct",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-25T01:12:45Z | ---
license: mit
language:
- en
pipeline_tag: image-text-to-text
tags:
- VLM
- multimodal
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-32B-Instruct
datasets:
- One-RL-to-See-Them-All/Orsta-Data-47k
---
# One RL to See Them All
* π **GitHub Repo:** [MiniMax-AI/One-RL-to-See-Them-All](https://github.com/MiniMax-AI/One-RL-to-See-Them-All)
* π **Paper (arXiv):** [V-Triune: One RL to See Them All (arXiv:2505.18129)](https://arxiv.org/abs/2505.18129)
* πΎ **Dataset:** [Orsta-Data-47k on Hugging Face](https://huggingface.co/datasets/One-RL-to-See-Them-All/Orsta-Data-47k)
## Model Overview
**Orsta-32B-0321** is a cutting-edge vision-language model (VLM) designed to achieve superior performance across a wide spectrum of both visual reasoning and visual perception tasks. This model is a result of post-training with [**V-Triune**](https://github.com/MiniMax-AI/One-RL-to-See-Them-All), our novel unified reinforcement learning (RL) system.
The V-Triune system enables VLMs to be jointly optimized on diverse multimodal tasks within a single, cohesive training pipeline. Orsta-32B-0321 has been specifically trained using V-Triune on a carefully curated set of eight challenging visual tasks, fostering robust generalization and enhanced capabilities.
## Training with V-Triune
Orsta-32B-0321's advanced abilities stem from its training with the V-Triune system. Key aspects of its training include:
* **Unified RL Framework (V-Triune):** V-Triune is a Visual Triple-Unified Reinforcement Learning system featuring three core complementary components:
* *Sample-Level Data Formatting* (to unify diverse task inputs)
* *Verifier-Level Reward Computation* (to deliver custom rewards via specialized verifiers)
* *Source-Level Metric Monitoring* (to diagnose problems at the data-source level)
* It also incorporates an innovative *Dynamic IoU reward* mechanism, crucial for optimizing visual perception tasks. You can find more details in our paper: [V-Triune](https://arxiv.org/abs/2505.18129)
* **Diverse Joint Task Optimization:** Orsta-32B-0321 was jointly optimized on the following eight visual tasks:
* *Visual Reasoning Tasks:* Mathematics, Science Question Answering, Chart Understanding, and Puzzle Solving.
* *Visual Perception Tasks:* Object Detection, Visual Grounding, Optical Character Recognition (OCR), and Object Counting.
This comprehensive training allows Orsta-32B-0321 to develop a deeper understanding of visual content and its relation to textual prompts, excelling in tasks that require intricate reasoning and precise perception.
## Performance
| Model | Knowledge | Mathematics | Perception | Coding | Info. Ex. | Planning | Science | Metrics | MEGA-Bench<br>Core |
| :--------------------------------------------- | ----------: | ------------: | -----------: | -------: | ----------: | ---------: | --------: | --------: | ------------------: |
| QwenVL-2.5-32B-0321 | 8.48 | 12.62 | 11.99 | 13.59 | 15.44 | 8.61 | 16.78 | 14.91 | 11.87 |
| MM-Eureka-32B π‘ | 12.20 | 20.19 | 21.88 | 15.86 | 21.23 | 15.47 | 19.95 | 22.77 | 18.57 |
| VL-Rethinker-32B π‘ | 12.16 | 28.09 | 22.99 | 11.89 | 21.50 | 15.09 | 28.10 | 15.73 | 19.41 |
| **Orsta-32B-0321 (Ours) π‘** | **21.33** | **28.55** | **32.23** | **19.44**| **26.38** | **17.78** | **33.20** | **24.18** | **25.94** |
| - | - | - | - | - | - | - | - | - | - |
| Ξ (Ours - Backbone) | +12.9 | +15.9 | +20.2 | +5.9 | +10.9 | +9.2 | +16.4 | +9.3 | +14.1 |
## How to Use
**Orsta-32B-0321** is developed by post-training the [**Qwen2.5-VL-32B-Instruct (0321 checkpoint)**](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct/tree/98948557b47f3244ac2764806ddd334ce3c684f9) model using our V-Triune reinforcement learning system. The Qwen2.5-VL-32B-Instruct (0321 checkpoint) is a publicly available baseline known for its reliable core reasoning abilities, alongside certain recognized limitations in perception and output formatting (which have been addressed in subsequent Qwen releases). Applying V-Triune to this specific baseline demonstrates its powerful post-training capability to unlock the model's inherent potential and significantly elevate its performance by refining and amplifying existing strengths.
Consequently, the core usage of **Orsta-32B-0321**, particularly regarding input formatting and model interaction, largely follows the established patterns of the Qwen2.5-VL series. Users familiar with Qwen2.5-VL models should find the interface intuitive.
For comprehensive details on the general capabilities of Qwen2.5-VL models, including multi-turn dialogue format and image input specifics, we recommend referring to the official [Qwen2.5-VL series documentation](https://huggingface.co/Qwen/Qwen2.5-VL-32B-Instruct) (please ensure to consult information relevant to the 32B Instruct version).
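As a concrete illustration of that pattern, here is a minimal sketch following the upstream Qwen2.5-VL usage (not from the original card; the image path and prompt are placeholders):

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "One-RL-to-See-Them-All/Orsta-32B-0321"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image", "image": "chart.png"},  # placeholder image path
    {"type": "text", "text": "What trend does this chart show?"},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=[text], images=images, videos=videos, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
```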
## Citation π
If you use Orsta-32B-0321 or the V-Triune system in your research, please cite our work:
```bibtex
@article{ma2025one,
title={One RL to See Them All: Visual Triple Unified Reinforcement Learning},
author={Ma, Yan and Du, Linge and Shen, Xuyang and Chen, Shaoxiang and Li, Pengfei and Ren, Qibing and Ma, Lizhuang and Dai, Yuchao and Liu, Pengfei and Yan, Junjie},
journal={arXiv preprint arXiv:2505.18129},
year={2025}
}
``` |
tartuNLP/nllb1.3-smugri4-v0.01 | tartuNLP | 2025-05-26T09:36:55Z | 14 | 3 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"nllb",
"translation",
"fkv",
"izh",
"kca",
"koi",
"kpv",
"krl",
"liv",
"lud",
"mdf",
"mhr",
"mns",
"mrj",
"myv",
"olo",
"sjd",
"sje",
"sju",
"sma",
"sme",
"smj",
"smn",
"sms",
"udm",
"vep",
"vot",
"vro",
"deu",
"eng",
"est",
"fin",
"hun",
"lvs",
"nor",
"rus",
"base_model:facebook/nllb-200-1.3B",
"base_model:finetune:facebook/nllb-200-1.3B",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2025-03-25T10:52:21Z | ---
language:
- fkv
- izh
- kca
- koi
- kpv
- krl
- liv
- lud
- mdf
- mhr
- mns
- mrj
- myv
- olo
- sjd
- sje
- sju
- sma
- sme
- smj
- smn
- sms
- udm
- vep
- vot
- vro
- deu
- eng
- est
- fin
- hun
- lvs
- nor
- rus
language_details: >-
fkv_Latn, izh_Latn, krl_Latn, liv_Latn, lud_Latn, olo_Latn, sje_Latn,
sju_Latn, sma_Latn, sme_Latn, smj_Latn, smn_Latn, sms_Latn, vep_Latn,
vot_Latn, vro_Latn, kca_Cyrl, koi_Cyrl, kpv_Cyrl, mdf_Cyrl, mhr_Cyrl,
mns_Cyrl, mrj_Cyrl, myv_Cyrl, sjd_Cyrl, udm_Cyrl, eng_Latn, est_Latn, fin_Latn,
hun_Latn, lvs_Latn, nor_Latn, rus_Cyrl
library_name: transformers
tags:
- nllb
- transformers
pipeline_tag: translation
license: cc-by-4.0
base_model:
- facebook/nllb-200-1.3B
---
# Smugri-tuned NLLB-1.3b, v0.01
This is a fine-tune of NLLB-1.3b with parallel data for 29 Finno-Ugric languages.
It supports different dialect/variety generation for some of the languages; more info below.
Information on the training data and other details: coming soon. **The training of this model is in progress**:
there are several known problems, and overall quality has not been tested yet. So far, only parallel
data has been used for training; more dialects will follow once monolingual/synthetic data is added.
Usage in Python, to translate from English to Veps (New written Veps dialect/variety):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("tartuNLP/nllb1.3-smugri4-v0.01")
tokenizer = AutoTokenizer.from_pretrained("tartuNLP/nllb1.3-smugri4-v0.01")
input_text = "<New written Veps> This is a short example sentence."
source_lang = "eng_Latn"
target_lang = "vep_Latn"
tokenizer.src_lang = source_lang
input_tokenized = tokenizer(input_text, return_tensors="pt")
output_raw = model.generate(**input_tokenized, forced_bos_token_id=tokenizer.convert_tokens_to_ids(target_lang))
output = tokenizer.decode(output_raw[0], skip_special_tokens=True)
print(output) # should be 'Nece om lΓΌhΓΌd ozutezsana.'
# for '<Central Eastern Veps>' the output becomes 'Nece om lΓΌhΓΌd naverzβ sanond.'
```
## Supported languages
- `est_Latn` (Estonian), `fin_Latn` (Finnish), `fkv_Latn` (Kven), `izh_Latn` (Izhorian*), `krl_Latn` (Proper Karelian*), `liv_Latn` (Livonian), `lud_Latn` (Ludian*), `olo_Latn` (Livvi-Karelian*), `vep_Latn` (Veps*), `vot_Latn` (Votic*), `vro_Latn` (VΓ΅ro)
- `sje_Latn` (Pite Sami), `sju_Latn` (Ume Sami), `sma_Latn` (Southern Sami), `sme_Latn` (Northern Sami), `smj_Latn` (Lule Sami), `smn_Latn` (Inari Sami), `sms_Latn` (Skolt Sami), `sjd_Cyrl` (Kildin Sami*)
- `kpv_Cyrl` (Komi-Zyrian), `koi_Cyrl` (Komi-Permyak), `udm_Cyrl` (Udmurt)
- `mdf_Cyrl` (Moksha), `myv_Cyrl` (Erzya)
- `mhr_Cyrl` (Meadow Mari), `mrj_Cyrl` (Hill Mari)
- `hun_Latn` (Hungarian), `kca_Cyrl` (Khanty*), `mns_Cyrl` (Mansi)
- `eng_Latn` (English), `lvs_Latn` (Latvian), `rus_Cyrl` (Russian), `nor_Latn` (Norwegian)
## Supported dialects
- for Izhorian: `alal` (Lower Luga), `soik` (Soikkola)
- for Votic: `I`, `J`, `Ja`, `K`, `KΓ΅`, `Ke`, `Ko`, `L`, `Li`, `Lu`, `M`, `P`, `Po`, `R`, `Ra`, `S`, `U`, `V` (explanation: https://arhiiv.eki.ee/dict/vadja/lisad/v_lyhendid.pdf)
- for Karelian Proper: `Dyorzha`, `Ilomantsi`, `Keret`, `Kestenga`, `Kontokki`, `Korbiselga`, `Maslozero`, `Myandyselga`, `New written Tver`, `New written karelian`, `Oulanga`, `Padany`, `Panozero`, `Poduzhemye`, `Porosozero`, `Reboly`, `Rugozero`, `Suistamo`, `Suoyarvi`, `Tikhtozero`, `Tikhvin`, `Tolmachi`, `Tunguda`, `Uhta`, `Valdai`, `Vesyegonsk`, `Voknavolok`, `Vychetaibola`, `Yushkozero`
- for Ludian: `Central Ludian (Munozero)`, `Mikhailovskoye`, `New written Ludian`, `Northern Ludian (Kondopoga)`, `Southern Ludian (Svjatozero)`, `Miikul` (Central Ludian)
- for Livvi-Karelian: `Impilahti`, `Kondushi`, `Kotkozero`, `Nekkula`, `New written Livvic`, `Rypushkalitsa`, `Salmi`, `Suoyarvi`, `Syamozero`, `Tulmozero`, `Vedlozero`, `Vidlitsa`
- for Veps: `Central Eastern Veps`, `Central Western Veps`, `New written Veps`, `Northern Veps`, `Southern Veps`
- for Kildin Sami: `orth1`
- for Khanty: `kazym` (Kazym), `shuryshkary` (Shuryshkar) |
Nerva1228/youbi | Nerva1228 | 2025-05-26T09:36:25Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T09:36:23Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: youbi
---
# Youbi
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `youbi` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "youbi",
"lora_weights": "https://huggingface.co/Nerva1228/youbi/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/youbi', weight_name='lora.safetensors')
image = pipeline('youbi').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 5e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/youbi/discussions) to add images that show off what youβve made with this LoRA.
|
tartuNLP/xls-r-300m-et-children | tartuNLP | 2025-05-26T09:35:51Z | 19 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"et",
"base_model:TalTechNLP/xls-r-300m-et",
"base_model:finetune:TalTechNLP/xls-r-300m-et",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-01-30T11:44:55Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: xls-r-300m-et-children
results: []
language:
- et
library_name: transformers
base_model:
- TalTechNLP/xls-r-300m-et
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-et-children
This model is a fine-tuned version of [TalTechNLP/xls-r-300m-et](https://huggingface.co/TalTechNLP/xls-r-300m-et) on an Estonian children's speech dataset.
More information about the model's performance and the data used for evaluation and training:
Luhtaru, Agnes; Jaaska, Rauno; KruusamΓ€e, Karl; Fishel, Mark (2023). Automatic Transcription for Estonian Childrenβs Speech. In: Proceedings of the 24th Nordic Conference on Computational Linguistics. [https://openreview.net/forum?id=xbPTfBIUby](https://openreview.net/forum?id=xbPTfBIUby)
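For quick inference, a minimal sketch using the `transformers` ASR pipeline (the audio path is a placeholder; a 16 kHz mono recording is assumed, and the evaluation setup in the paper may differ):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="tartuNLP/xls-r-300m-et-children")

# Placeholder path to a 16 kHz mono recording of Estonian children's speech
result = asr("sample.wav")
print(result["text"])
```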
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3485 | 1.61 | 100 | 0.2034 | 0.1782 |
| 0.1914 | 3.22 | 200 | 0.1818 | 0.1606 |
| 0.1641 | 4.83 | 300 | 0.1770 | 0.1572 |
| 0.1479 | 6.45 | 400 | 0.1745 | 0.1546 |
| 0.1374 | 8.06 | 500 | 0.1751 | 0.1538 |
| 0.1244 | 9.67 | 600 | 0.1734 | 0.1518 |
| 0.1211 | 11.29 | 700 | 0.1753 | 0.1508 |
| 0.1096 | 12.9 | 800 | 0.1758 | 0.1483 |
| 0.1059 | 14.51 | 900 | 0.1771 | 0.1469 |
| 0.0991 | 16.13 | 1000 | 0.1776 | 0.1469 |
| 0.0965 | 17.74 | 1100 | 0.1759 | 0.1469 |
| 0.0944 | 19.35 | 1200 | 0.1784 | 0.1459 |
| 0.0902 | 20.96 | 1300 | 0.1799 | 0.1469 |
| 0.0867 | 22.58 | 1400 | 0.1814 | 0.1440 |
| 0.0817 | 24.19 | 1500 | 0.1828 | 0.1438 |
| 0.0802 | 25.8 | 1600 | 0.1845 | 0.1438 |
| 0.0762 | 27.42 | 1700 | 0.1843 | 0.1431 |
| 0.0774 | 29.03 | 1800 | 0.1839 | 0.1432 |
| 0.0741 | 30.64 | 1900 | 0.1843 | 0.1442 |
| 0.0718 | 32.26 | 2000 | 0.1846 | 0.1429 |
| 0.07 | 33.86 | 2100 | 0.1852 | 0.1429 |
| 0.0692 | 35.48 | 2200 | 0.1872 | 0.1435 |
| 0.0671 | 37.1 | 2300 | 0.1874 | 0.1433 |
| 0.0667 | 38.7 | 2400 | 0.1887 | 0.1435 |
| 0.066 | 40.32 | 2500 | 0.1880 | 0.1422 |
| 0.0628 | 41.93 | 2600 | 0.1897 | 0.1426 |
| 0.0643 | 43.54 | 2700 | 0.1910 | 0.1428 |
| 0.0643 | 45.16 | 2800 | 0.1900 | 0.1431 |
| 0.0638 | 46.77 | 2900 | 0.1900 | 0.1427 |
| 0.0601 | 48.38 | 3000 | 0.1911 | 0.1431 |
| 0.0593 | 49.99 | 3100 | 0.1914 | 0.1432 |
| 0.0606 | 51.61 | 3200 | 0.1912 | 0.1433 |
| 0.0609 | 53.22 | 3300 | 0.1912 | 0.1431 |
| 0.0587 | 54.83 | 3400 | 0.1921 | 0.1429 |
| 0.0567 | 56.45 | 3500 | 0.1924 | 0.1430 |
| 0.0611 | 58.06 | 3600 | 0.1927 | 0.1431 |
| 0.0581 | 59.67 | 3700 | 0.1926 | 0.1430 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+rocm5.1.1
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2 |
quickstep3621/dippy-v11-1-5 | quickstep3621 | 2025-05-26T09:33:18Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"BifrΓΆst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T09:33:13Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, youβre required to review and agree to
Googleβs usage license. To do this, please ensure youβre logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- BifrΓΆst
- Bifrost
- code
---
## BifrΓΆst-27B

BifrΓΆst-27B is an advanced AI model built upon gemma3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, BifrΓΆst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** BifrΓΆst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
BifrΓΆst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- BifrΓΆst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
tartuNLP/Llammas-base | tartuNLP | 2025-05-26T09:32:29Z | 60 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"et",
"en",
"arxiv:2404.04042",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-16T08:45:07Z | ---
language:
- et
- en
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-2-7b-hf
---
# LLammas-base π
Llama-2-7B with continued pre-training of 5B tokens of CulturaX (75% Estonian, 25% English documents).
An instruction-tuned version of this model, [Llammas](https://huggingface.co/tartuNLP/Llammas), is also available.
More details in our [paper](https://arxiv.org/abs/2404.04042).
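As a base (non-instruction-tuned) model, it is best used for plain text continuation. A minimal sketch, where the Estonian prompt is an arbitrary example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tartuNLP/Llammas-base")
model = AutoModelForCausalLM.from_pretrained("tartuNLP/Llammas-base", device_map="auto")

# Plain continuation -- no chat template, since this is a base model
inputs = tokenizer("Eesti on riik, mis", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```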
### Citation
```
@misc{kuulmets2024teaching,
title={Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer},
author={Hele-Andra Kuulmets and Taido Purason and Agnes Luhtaru and Mark Fishel},
year={2024},
eprint={2404.04042},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
bansarivyas3/nbhgty | bansarivyas3 | 2025-05-26T09:30:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2025-05-26T09:30:27Z | ---
license: creativeml-openrail-m
---
|
RajeevanL/distilled_XLMRoberta_153_v5 | RajeevanL | 2025-05-26T09:30:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| question-answering | 2025-05-26T09:29:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memengoc/chatgpt3.5nn | memengoc | 2025-05-26T09:29:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openchat/openchat-3.5-0106",
"base_model:adapter:openchat/openchat-3.5-0106",
"region:us"
]
| null | 2025-05-26T09:26:21Z | ---
base_model: openchat/openchat-3.5-0106
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
JavaneseHonorifics/Unggah-Ungguh-Javanese-GPT2-Classifier | JavaneseHonorifics | 2025-05-26T09:29:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"jv",
"dataset:JavaneseHonorifics/Unggah-Ungguh",
"arxiv:2502.20864",
"base_model:w11wo/javanese-gpt2-small-imdb",
"base_model:finetune:w11wo/javanese-gpt2-small-imdb",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-26T09:20:29Z | ---
license: cc-by-nc-4.0
language:
- jv
datasets:
- JavaneseHonorifics/Unggah-Ungguh
base_model:
- w11wo/javanese-gpt2-small-imdb
pipeline_tag: text-classification
library_name: transformers
---
# Unggah-Ungguh-Javanese-GPT2-Classifier
Unggah-Ungguh-Javanese-GPT2-Classifier is part of the Unggah-Ungguh model family: a classifier for the Javanese honorific classification task introduced in "Do Language Models Understand Honorific Systems in Javanese?". Check out [our paper](https://arxiv.org/abs/2502.20864) for more information!
## Model description
- **Model type**: A classifier model trained on a highly curated Unggah-Ungguh dataset that represent Javanese Honorific rules and systems.
- **Language(s) NLP**: Javanese
- **License:** CC-BY-NC 4.0
- **Finetuned from model:** w11wo/javanese-gpt2-small-imdb
## Model Sources
- **Project Page:** https://javanesehonorifics.github.io/
- **Repository:** https://github.com/JavaneseHonorifics
- **Paper:** https://arxiv.org/abs/2502.20864
## Using the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_path = "JavaneseHonorifics/Unggah-Ungguh-Javanese-GPT2-Classifier"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
INPUT_TEXT = "Mbak Srini mangan pecel ajange pincuk"
tokenized_input = tokenizer([INPUT_TEXT], return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
outputs = model(**tokenized_input)
y_pred = outputs.logits.argmax(-1)
print("Predicted class:", y_pred.item())
```
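The mapping from predicted class ids to honorific labels, if shipped with the model config, can be inspected as follows:
```python
# Label names as defined by the model authors (falls back to generic LABEL_i
# names if the config does not set them)
print(model.config.id2label)
```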
## License and Use
Unggah-Ungguh is licensed under the CC-BY-NC 4.0
## Citation
```bibtex
@article{farhansyah2025language,
title={Do Language Models Understand Honorific Systems in Javanese?},
author={Farhansyah, Mohammad Rifqi and Darmawan, Iwan and Kusumawardhana, Adryan and Winata, Genta Indra and Aji, Alham Fikri and Wijaya, Derry Tanti},
journal={arXiv preprint arXiv:2502.20864},
year={2025}
}
``` |
akashmadisetty/fine-tuned-translation-qwen | akashmadisetty | 2025-05-26T09:25:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T09:25:29Z | ---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** akashmadisetty
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
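The exact instruction format used for this fine-tune is not documented; below is a minimal inference sketch assuming merged weights and the standard Qwen3 chat template (the translation prompt is hypothetical):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akashmadisetty/fine-tuned-translation-qwen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical prompt -- adjust to the instruction format used in training
messages = [{"role": "user", "content": "Translate to English: Bonjour tout le monde."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```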
|
primeline/whisper-tiny-german-1224 | primeline | 2025-05-26T09:25:17Z | 589 | 9 | transformers | [
"transformers",
"onnx",
"safetensors",
"whisper",
"automatic-speech-recognition",
"de",
"arxiv:2409.03137",
"doi:10.57967/hf/3826",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-11-23T18:20:51Z | ---
license: apache-2.0
language:
- de
library_name: transformers
pipeline_tag: automatic-speech-recognition
---
### Summary
This model card describes a model based on the tiny Whisper architecture, trained for speech recognition in German.
Whisper is a powerful family of speech recognition models developed by OpenAI.
### Applications
This model can be used in various application areas, including
- Transcription of spoken German language
- Voice commands and voice control
- Automatic subtitling for German videos
- Voice-based search queries in German
- Dictation functions in word processing programs
## Evaluations - Word error rate
```
+-----------------------------------------+-------+-----------+----------------------------+---------------------+
| Model | All | Tuda-De | multilingual librispeech | common_voice_19_0 |
+=========================================+=======+===========+============================+=====================+
| openai-whisper-large-v3 | 3.28 | 7.86 | 2.85 | 3.46 |
+-----------------------------------------+-------+-----------+----------------------------+---------------------+
| openai-whisper-large-v3-turbo | 3.64 | 8.20 | 3.19 | 3.85 |
+-----------------------------------------+-------+-----------+----------------------------+---------------------+
| openai-whisper-medium | 5.49 | 11.13 | 5.04 | 5.53 |
+-----------------------------------------+-------+-----------+----------------------------+---------------------+
| primeline-whisper-tiny-german-1224 | 6.26 | 9.62 | 4.97 | 8.46 |
+-----------------------------------------+-------+-----------+----------------------------+---------------------+
| openai-whisper-small | 9.54 | 15.94 | 8.77 | 10.15 |
+-----------------------------------------+-------+-----------+----------------------------+---------------------+
| openai-whisper-base | 18.75 | 33.58 | 17.15 | 19.74 |
+-----------------------------------------+-------+-----------+----------------------------+---------------------+
| openai-whisper-tiny | 28.80 | 47.33 | 26.47 | 30.76 |
+-----------------------------------------+-------+-----------+----------------------------+---------------------+
```
| Size | Parameters |
|----------|------------|
| tiny | 39 M |
| base | 74 M |
| small | 244 M |
| medium | 769 M |
| large | 1550 M |
| large-v2 | 1550 M |
The results were calculated in December 2024 and may change over time as the evaluation corpus is updated.
For the latest results, please check the code and dataset page.
The data and code for evaluations are available [here](https://huggingface.co/datasets/flozi00/asr-german-mixed-evals)
### Training data
The training data for this model includes a large amount of spoken German from various sources.
The data was carefully selected and processed to optimize recognition performance.
The dataset size is about 6.000 hours of public, proprietary and synthetic data.
### Training process
The training of the model was performed with the following hyperparameters
- Batch size: 32768
- Epochs: 48
- Learning rate: 1e-4
- Data augmentation: No
- Optimizer: [Ademamix](https://arxiv.org/abs/2409.03137)
### How to use
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
from datasets import load_dataset
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "primeline/whisper-tiny-german-1224"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
chunk_length_s=30,
batch_size=16,
return_timestamps=True,
torch_dtype=torch_dtype,
device=device,
)
dataset = load_dataset("distil-whisper/librispeech_long", "clean", split="validation")
sample = dataset[0]["audio"]
result = pipe(sample)
print(result["text"])
```
## [About us](https://primeline-ai.com/en/)
[](https://primeline-ai.com/en/)
Your partner for AI infrastructure in Germany
Experience the powerful AI infrastructure that drives your ambitions in Deep Learning, Machine Learning & High-Performance Computing.
Optimized for AI training and inference.
Model author: [Florian Zimmermeister](https://huggingface.co/flozi00)
**Disclaimer**
```
This model is not a product of the primeLine Group.
It represents research conducted by [Florian Zimmermeister](https://huggingface.co/flozi00), with computing power sponsored by primeLine.
The model is published under this account by primeLine, but it is not a commercial product of primeLine Solutions GmbH.
Please be aware that while we have tested and developed this model to the best of our abilities, errors may still occur.
Use of this model is at your own risk. We do not accept liability for any incorrect outputs generated by this model.
``` |
TanAlexanderlz/ALL_RGBCROP_ori16F-8B16F | TanAlexanderlz | 2025-05-26T09:23:04Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2025-05-26T08:33:29Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ALL_RGBCROP_ori16F-8B16F
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ALL_RGBCROP_ori16F-8B16F
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6047
- Accuracy: 0.8443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 768
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4388 | 0.125 | 96 | 0.4338 | 0.7988 |
| 0.2352 | 1.125 | 192 | 0.6832 | 0.7622 |
| 0.1411 | 2.125 | 288 | 0.8688 | 0.8476 |
| 0.0005 | 3.125 | 384 | 0.9177 | 0.8354 |
| 0.0002 | 4.125 | 480 | 1.0111 | 0.8354 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Darkknight535/Contrl-Stheno-v1-8B | Darkknight535 | 2025-05-26T09:22:58Z | 7 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Sao10K/L3-8B-Stheno-v3.2",
"Delta-Vector/Control-Nanuq-8B",
"conversational",
"en",
"base_model:Delta-Vector/Control-Nanuq-8B",
"base_model:merge:Delta-Vector/Control-Nanuq-8B",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-25T08:26:12Z | ---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Delta-Vector/Control-Nanuq-8B
tags:
- merge
- mergekit
- lazymergekit
- Sao10K/L3-8B-Stheno-v3.2
- Delta-Vector/Control-Nanuq-8B
language:
- en
library_name: transformers
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #FF69B4 0%, #800080 100%);
color: #FFFFFF;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
}
.container {
margin: 20px;
background-color: rgba(28, 14, 36, 0.95);
padding: 20px;
border-radius: 12px;
box-shadow: 0 4px 20px rgba(255, 105, 180, 0.4);
border: 1px solid rgba(255, 105, 180, 0.4);
outline: 1px solid rgba(255, 105, 180, 0.7);
outline-offset: -1px;
position: relative;
backdrop-filter: blur(10px);
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.98);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 2s ease-in-out infinite;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.98);
}
50% {
box-shadow: 0 0 20px rgba(255, 105, 180, 0.98);
}
100% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.98);
}
}
.header h1 {
font-size: 28px;
color: #FF69B4;
margin: 0 0 20px 0;
text-shadow: 0 0 15px rgba(255, 105, 180, 0.8);
letter-spacing: 1px;
}
.update-section {
margin-top: 30px;
}
.update-section h2, h2 {
font-size: 24px;
color: #FF69B4;
text-shadow: 0 0 15px rgba(255, 105, 180, 0.8);
letter-spacing: 0.5px;
}
.update-section p {
font-size: 16px;
line-height: 1.6;
color: #FFE1FF;
}
.info p {
color: #FFE1FF;
line-height: 1.6;
font-size: 16px;
}
.info img {
width: 100%;
border-radius: 10px;
margin-bottom: 15px;
box-shadow: 0 0 30px rgba(255, 105, 180, 0.5);
border: 1px solid rgba(255, 105, 180, 0.4);
outline: 1px solid rgba(255, 105, 180, 0.7);
outline-offset: -1px;
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.info img:hover {
transform: scale(1.01);
box-shadow: 0 0 40px rgba(255, 105, 180, 0.6);
}
a {
color: #00FFEE;
text-decoration: none;
transition: color 0.3s ease;
}
a:hover {
color: #FF1493;
}
.button {
display: inline-block;
background: linear-gradient(45deg, rgba(255, 105, 180, 0.9), rgba(128, 0, 128, 0.9));
color: #FFFFFF;
padding: 12px 24px;
border-radius: 5px;
cursor: pointer;
text-decoration: none;
transition: all 0.3s ease;
border: 1px solid rgba(255, 105, 180, 0.4);
}
.button:hover {
background: linear-gradient(45deg, rgba(255, 105, 180, 1), rgba(128, 0, 128, 1));
box-shadow: 0 0 20px rgba(255, 105, 180, 0.7);
transform: translateY(-2px);
}
pre {
background-color: rgba(28, 14, 36, 0.95);
padding: 15px;
border-radius: 5px;
overflow-x: auto;
border: 1px solid rgba(255, 20, 147, 0.3);
outline: 1px solid rgba(255, 20, 147, 0.6);
outline-offset: -1px;
}
code {
font-family: 'Courier New', monospace;
color: #FFE1FF;
}
.benchmark-container {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 12px;
padding: 20px;
margin: 20px 0;
position: relative;
overflow: hidden;
}
.benchmark-container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 20, 147, 0.98);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 2s ease-in-out infinite;
}
.benchmark-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 15px;
}
.metric-box {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
display: flex;
flex-direction: column;
align-items: center;
text-align: center;
transition: transform 0.3s ease, box-shadow 0.3s ease;
}
.metric-box:hover {
transform: translateY(-2px);
box-shadow: 0 4px 15px rgba(255, 20, 147, 0.3);
}
.metric-box .label {
color: #00FFEE;
font-size: 14px;
margin-bottom: 8px;
font-weight: 500;
}
.metric-box .value {
color: #FFE1FF;
font-size: 18px;
font-weight: 600;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.5);
}
.metrics-section {
margin-bottom: 30px;
}
.metrics-section details {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
margin-bottom: 15px;
}
.metrics-section summary {
color: #FF1493;
font-size: 20px;
cursor: pointer;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
outline: none;
padding: 5px 0;
}
.metrics-section summary::-webkit-details-marker {
display: none;
}
.core-metrics-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 15px;
margin-bottom: 20px;
}
.progress-metrics {
display: grid;
gap: 15px;
}
.progress-metric {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
transition: transform 0.3s ease;
}
.progress-metric:hover {
transform: translateY(-2px);
}
.progress-label {
display: flex;
justify-content: space-between;
margin-bottom: 8px;
color: #00FFEE;
font-size: 14px;
}
.progress-value {
color: #FFE1FF;
}
.progress-bar {
width: 100%;
height: 8px;
background: rgba(0, 0, 0, 0.3);
border: 1px solid rgba(255, 20, 147, 0.15);
border-radius: 4px;
position: relative;
margin: 10px 0;
overflow: hidden;
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, #FF69B4 0%, #800080 100%);
border-radius: 4px;
transition: width 1s ease-in-out;
box-shadow: 0 0 15px rgba(255, 105, 180, 0.4);
}
.progress-bar.split {
display: flex;
justify-content: center;
background: rgba(0, 0, 0, 0.3);
border: 1px solid rgba(255, 20, 147, 0.15);
overflow: visible;
}
.progress-fill-left {
height: 100%;
position: absolute;
right: 50%;
background: linear-gradient(90deg, #FF69B4 30%, rgba(255, 105, 180, 0.5) 100%);
border-radius: 4px 0 0 4px;
transition: width 0.3s ease-in-out;
}
.progress-fill-right {
height: 100%;
position: absolute;
left: 50%;
background: linear-gradient(90deg, rgba(128, 0, 128, 0.5) 0%, #800080 70%);
border-radius: 0 4px 4px 0;
transition: width 0.3s ease-in-out;
}
.progress-metric.split .progress-bar::before,
.progress-metric.split .progress-bar::after {
content: '';
position: absolute;
width: 2px;
height: 20px;
background: rgba(255, 255, 255, 0.7);
top: 50%;
transform: translateY(-50%);
z-index: 2;
box-shadow: 0 0 8px rgba(255, 255, 255, 0.5);
}
.progress-metric.split .progress-bar::before {
left: 0;
}
.progress-metric.split .progress-bar::after {
right: 0;
}
.progress-metric.split:hover .progress-fill-left {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-metric.split:hover .progress-fill-right {
box-shadow: 0 0 15px rgba(75, 0, 130, 0.5);
}
.progress-metric.split {
padding: 12px 15px;
}
.progress-metric.split .progress-label {
margin-bottom: 8px;
gap: 12px;
}
.progress-metric.split .progress-label span:first-child,
.progress-metric.split .progress-label span:last-child {
flex: 0 0 80px;
font-size: 14px;
}
.progress-metric.split .progress-value {
font-weight: 600;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
font-size: 14px;
min-width: 60px;
text-align: center;
}
.progress-metric:hover .progress-fill-center {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-label {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 4px;
color: #00FFEE;
font-size: 14px;
}
.progress-metric:not(.split) .progress-label {
gap: 12px;
}
.progress-metric:not(.split) .progress-label span {
flex: 0 0 auto;
}
.progress-metric:not(.split) .progress-value {
color: #FFE1FF;
}
.progress-metric.split .progress-label {
gap: 20px;
}
.progress-metric.split .progress-label span:first-child,
.progress-metric.split .progress-label span:last-child {
flex: 0 0 80px;
}
.progress-metric.split .progress-label span:first-child {
text-align: right;
}
.progress-metric.split .progress-label span:last-child {
text-align: left;
}
.progress-metric.split .progress-value {
color: #FFE1FF;
flex: 0 0 60px;
text-align: center;
}
.progress-metric:hover .progress-fill {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
.progress-metric:hover .progress-fill-center {
box-shadow: 0 0 15px rgba(75, 0, 130, 0.5);
}
.info-grid {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 15px;
}
.creator-section {
margin: 20px 0;
}
.creator-badge {
display: inline-flex;
align-items: center;
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 10px 15px;
}
.creator-label {
color: #FFE1FF;
font-size: 14px;
margin-right: 8px;
}
.creator-link {
display: flex;
align-items: center;
gap: 5px;
color: #00FFEE;
text-decoration: none;
transition: all 0.3s ease;
}
.creator-name {
font-weight: 600;
}
.creator-arrow {
font-size: 16px;
transition: transform 0.3s ease;
}
.creator-link:hover {
color: #FF1493;
}
.creator-link:hover .creator-arrow {
transform: translateX(3px);
}
.model-info {
margin-top: 30px;
}
.name-legend {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
margin: 20px 0;
}
.name-legend h3 {
color: #FF1493;
font-size: 18px;
margin: 0 0 15px 0;
}
.legend-grid {
display: grid;
gap: 12px;
}
.legend-item {
display: flex;
align-items: baseline;
gap: 10px;
}
.legend-key {
color: #00FFEE;
font-weight: 600;
min-width: 80px;
}
.legend-value {
color: #FFE1FF;
}
.model-description {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
}
.model-description p {
margin: 0 0 15px 0;
line-height: 1.6;
}
.model-description p:last-child {
margin-bottom: 0;
}
.section-container {
margin: 40px 0;
}
.info-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
overflow: hidden;
}
.info-header {
background: rgba(255, 20, 147, 0.1);
padding: 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.info-header h3 {
color: #FF1493;
margin: 0 0 10px 0;
font-size: 20px;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
.model-tags {
display: flex;
gap: 8px;
flex-wrap: wrap;
}
.model-tag {
background: rgba(0, 255, 238, 0.1);
color: #00FFEE;
padding: 4px 8px;
border-radius: 4px;
font-size: 12px;
border: 1px solid rgba(0, 255, 238, 0.2);
}
.model-composition {
padding: 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.model-composition h4 {
color: #FF1493;
margin: 0 0 15px 0;
font-size: 16px;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 10px;
}
.composition-list li {
color: #FFE1FF;
display: flex;
align-items: baseline;
gap: 8px;
}
.model-component {
color: #00FFEE;
font-weight: 500;
min-width: 120px;
}
.template-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
}
.template-item {
display: flex;
align-items: center;
gap: 12px;
}
.template-icon {
width: 24px;
height: 24px;
opacity: 0.8;
}
.template-content {
display: flex;
align-items: baseline;
gap: 8px;
}
.template-link {
color: #00FFEE;
text-decoration: none;
font-weight: 500;
display: flex;
align-items: center;
gap: 5px;
}
.template-author {
color: rgba(255, 225, 255, 0.7);
font-size: 14px;
}
.quantized-container {
display: grid;
gap: 20px;
}
.quantized-section {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 20px;
}
.quantized-section h3 {
color: #FF1493;
font-size: 18px;
margin: 0 0 15px 0;
}
.quantized-items {
display: grid;
gap: 12px;
}
.quantized-item {
display: flex;
align-items: baseline;
gap: 10px;
}
.quantized-item .author {
color: rgba(255, 225, 255, 0.7);
min-width: 100px;
}
.multi-links {
display: flex;
align-items: center;
gap: 8px;
}
.separator {
color: rgba(255, 225, 255, 0.5);
}
.config-container {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
overflow: hidden;
}
.config-header {
background: rgba(255, 20, 147, 0.1);
padding: 15px 20px;
border-bottom: 1px solid rgba(255, 20, 147, 0.3);
}
.model-name {
color: #FF1493;
font-weight: 600;
}
.config-content {
padding: 20px;
}
.config-item {
display: flex;
flex-direction: column;
gap: 5px;
margin-bottom: 15px;
}
.config-label {
color: #00FFEE;
font-size: 14px;
font-weight: 500;
}
.config-value {
color: #FFE1FF;
font-family: 'Courier New', monospace;
}
.config-models {
margin-top: 20px;
}
.model-list {
list-style: none;
padding: 0;
margin: 10px 0 0 0;
}
.model-list li {
color: #FFE1FF;
font-family: 'Courier New', monospace;
padding: 5px 0;
padding-left: 20px;
position: relative;
}
.model-list li::before {
content: '-';
position: absolute;
left: 0;
color: #00FFEE;
}
.link-arrow {
display: inline-block;
transition: transform 0.3s ease;
}
a:hover .link-arrow {
transform: translateX(3px);
}
.benchmark-notification {
background: rgba(255, 20, 147, 0.15);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
margin-bottom: 20px;
padding: 12px;
animation: glowPulse 2s infinite;
}
.notification-content {
display: flex;
align-items: center;
justify-content: center;
gap: 10px;
text-align: center;
}
.notification-icon {
font-size: 20px;
}
.notification-text {
color: #FFE1FF;
font-size: 16px;
font-weight: 500;
display: flex;
flex-direction: column;
align-items: center;
gap: 5px;
}
.benchmark-link {
color: #00FFEE;
text-decoration: none;
font-size: 14px;
padding: 4px 8px;
border-radius: 4px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 238, 0.3);
}
.benchmark-link:hover {
background: rgba(0, 255, 238, 0.1);
border-color: rgba(0, 255, 238, 0.5);
color: #00FFEE;
text-shadow: 0 0 5px rgba(0, 255, 238, 0.5);
}
@keyframes glowPulse {
0% {
box-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
50% {
box-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
}
}
.review-card {
background: rgba(28, 14, 36, 0.95);
border: 1px solid rgba(255, 20, 147, 0.3);
border-radius: 8px;
padding: 15px;
margin-bottom: 15px;
}
.review-card:last-child {
margin-bottom: 0;
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Contrl-Stheno-8B-v1</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<link href="styles.css" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="header">
<h1>Contrl-Stheno-8B-v1</h1>
</div>
<div class="info">
<img src="https://huggingface.co/Darkknight535/Contrl-Stheno-v1-8B/resolve/main/img_.jpg" alt="Model banner">
<div class="creator-section">
<div class="creator-badge">
<span class="creator-label">Created by</span>
<a href="https://huggingface.co/Darkknight535" target="_blank" class="creator-link">
<span class="creator-name">Darkknight535</span>
<span class="creator-arrow">β</span>
</a>
</div>
</div>
<div class="model-info">
<h2>Model Information</h2>
<div class="info-card">
<div class="info-header">
<h3>Contrl-Stheno-8B-v1</h3>
<div class="model-tags">
<span class="model-tag">Stheno = Stheno-v3.2</span>
<span class="model-tag">Contrl = Control-Nanuq</span>
<span class="model-tag">8b Parameters</span>
</div>
</div>
<div class="model-composition">
<h4>Model Composition</h4>
<ul class="composition-list">
<li><span class="model-component"><a href="https://huggingface.co/Delta-Vector/Control-Nanuq-8B" target="_blank">Control Nanuq 8B</a></span> Sweetness and Creativity capabilities</li>
<li><span class="model-component"><a href="https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2" target="_blank">Stheno-3.2 8B</a></span> Roleplay and logic</li>
</ul>
</div>
<div class="model-description">
<p>An experiment of mine which turned out to be great! It has dialogues I hadn't found even in 70B models.</p>
</div>
</div>
<!--<div class="metrics-section">
<details open>
<summary>User Reviews</summary>
<div class="progress-metrics">
<div>
<div class="review-card">
<div>
<span>[USERNAME]</span>
</div>
<p>[REVIEW]</p>
</div>
<div class="review-card">
<div>
<span>[USERNAME]</span>
</div>
<p>[REVIEW]</p>
</div>
<div class="review-card">
<div>
<span>[USERNAME]</span>
</div>
<p>[REVIEW]</p>
</div>
</div>
</div>
</details>
</div>-->
</div>
<div class="section-container">
<h2>Reccomended Templates & Prompts</h2>
<div class="template-card">
<div class="template-item">
<div class="template-content">
<a href="" target="_blank" class="template-link">
Sao10k's Euryale System Prompt OR EVA System Prompt
<span class="link-arrow">β</span>
</a>
<span class="template-author">by Sao10k and EVA-UNIT-01</span>
</div>
</div>
</div>
</div>
<div class="section-container">
<h2>Quantized Versions</h2>
<div class="quantized-container">
<div class="quantized-section">
<h3>GGUF Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">mradermacher</span>
<a href="https://huggingface.co/mradermacher/Contrl-Stheno-v1-8B-GGUF" target="_blank">
STATIC-GGUF
<span class="link-arrow">β</span>
</a>
</div>
</div>
</div>
<div class="quantized-section">
<h3>Imat GGUF Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">mradermacher</span>
<a href="https://huggingface.co/mradermacher/Contrl-Stheno-v1-8B-i1-GGUF" target="_blank">
IMAT-GGUF
<span class="link-arrow">β</span>
</a>
</div>
</div>
</div>
</div>
</div>
<div class="support-section">
<h2>Thanks to these people (I just made a script and stole SteelSkull's Readme Template)</h2>
<div class="support-buttons">
<a href="https://huggingface.co/Sao10k" target="_blank" class="button">
Support Sao10K
</a>
<a href="https://huggingface.co/Delta-Vector" target="_blank" class="button">
Support Delta-Vector
</a>
<a href="https://huggingface.co/Steelskull" target="_blank" class="button">
Support SteelSkull
</a>
</div>
</div>
</div>
</div>
</body>
</html> |
banhkeomath2/sound | banhkeomath2 | 2025-05-26T09:20:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-03-07T05:40:54Z | ---
license: apache-2.0
---
|
dimasik87/492f5d86-02df-4ab1-809e-25ff65e925e5 | dimasik87 | 2025-05-26T09:19:31Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:quantized:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-26T08:33:01Z | ---
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
library_name: transformers
model_name: 492f5d86-02df-4ab1-809e-25ff65e925e5
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 492f5d86-02df-4ab1-809e-25ff65e925e5
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik87/492f5d86-02df-4ab1-809e-25ff65e925e5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/b2djyfkn)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
GAYOEN/find-bench | GAYOEN | 2025-05-26T09:19:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-26T09:16:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
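Until the authors fill this in, here is a minimal hedged sketch based only on the repo tags (`roberta`, `text-classification`); the label set and the intended inputs are unknown:

```python
from transformers import pipeline

# Hypothetical usage: the tags suggest a standard sequence-classification
# checkpoint, but the card does not document its labels or input format.
classifier = pipeline("text-classification", model="GAYOEN/find-bench")
print(classifier("Example input sentence."))
```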
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yogi99/Aadi9670 | Yogi99 | 2025-05-26T09:19:09Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T09:19:09Z | ---
license: apache-2.0
---
|
mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF | mradermacher | 2025-05-26T09:18:44Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:BeaverAI/REDACTED1",
"dataset:BeaverAI/REDACTED2",
"dataset:BeaverAI/REDACTED3",
"dataset:BeaverAI/REDACTED4",
"dataset:BeaverAI/REDACTED5",
"dataset:BeaverAI/REDACTED6",
"dataset:PJMixers-Dev/Lit-axo-Shuffled",
"dataset:PJMixers-Dev/Mielikki_Erebus-87k-axo",
"dataset:PJMixers/RyokoAI_Honeyfeed3600-Cleanish",
"dataset:PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo",
"dataset:Nelathan/synthetic-sugar-quill",
"dataset:PJMixers-Dev/winglian_visual-novels-json-axo-dropped-long",
"dataset:PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned",
"dataset:PJMixers-Dev/Subtitles",
"dataset:PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo",
"dataset:PJMixers/AP-News-2024",
"dataset:PJMixers-Dev/Fundus-AP-News-Formatted",
"dataset:PJMixers-Dev/Fundus-AP-News-2-Formatted",
"dataset:PJMixers-Dev/goodwiki-2024-12-04-axo",
"dataset:epfl-llm/guidelines",
"dataset:PJMixers-Dev/allenai_tulu-3-sft-mixture-filtered-2-ShareGPT",
"dataset:OpenLeecher/lmsys_chat_1m_clean",
"dataset:PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed",
"dataset:allura-org/gryphe-sonnet-3.5-charcards-names-added",
"dataset:anthracite-org/c2_logs_32k_llama3_qwen2_v1.3",
"dataset:PJMixers-Dev/MinervaAI_Aesir-Preview-Anon",
"dataset:PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT",
"dataset:PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT",
"dataset:grimulkan/aicg-logs-augmented",
"dataset:grimulkan/PIPPA-augmented-dedup",
"dataset:PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted",
"dataset:PJMixers/lodrick-the-lafted_OpusStories-ShareGPT",
"dataset:Gryphe/ChatGPT-4o-Writing-Prompts",
"dataset:Gryphe/Opus-WritingPrompts",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT",
"dataset:allura-org/fujin-instruct-v2",
"dataset:ToastyPigeon/gutenberg-sft",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:TheDrummer/AmoralQA-v2",
"base_model:PJMixers-Dev/Granite-3.1-Earthen-v0.3-3B-A800M",
"base_model:quantized:PJMixers-Dev/Granite-3.1-Earthen-v0.3-3B-A800M",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2025-05-25T20:42:33Z | ---
base_model: PJMixers-Dev/Granite-3.1-Earthen-v0.3-3B-A800M
datasets:
- BeaverAI/REDACTED1
- BeaverAI/REDACTED2
- BeaverAI/REDACTED3
- BeaverAI/REDACTED4
- BeaverAI/REDACTED5
- BeaverAI/REDACTED6
- PJMixers-Dev/Lit-axo-Shuffled
- PJMixers-Dev/Mielikki_Erebus-87k-axo
- PJMixers/RyokoAI_Honeyfeed3600-Cleanish
- PJMixers-Dev/allura-org_fujin-cleaned-stage-2-axo
- Nelathan/synthetic-sugar-quill
- PJMixers-Dev/winglian_visual-novels-json-axo-dropped-long
- PJMixers-Dev/recursal_SCP-RECURSAL-Cleaned
- PJMixers-Dev/Subtitles
- PJMixers-Dev/KaraKaraWitch_AnimeSubtitle-axo
- PJMixers/AP-News-2024
- PJMixers-Dev/Fundus-AP-News-Formatted
- PJMixers-Dev/Fundus-AP-News-2-Formatted
- PJMixers-Dev/goodwiki-2024-12-04-axo
- epfl-llm/guidelines
- PJMixers-Dev/allenai_tulu-3-sft-mixture-filtered-2-ShareGPT
- OpenLeecher/lmsys_chat_1m_clean
- PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed
- allura-org/gryphe-sonnet-3.5-charcards-names-added
- anthracite-org/c2_logs_32k_llama3_qwen2_v1.3
- PJMixers-Dev/MinervaAI_Aesir-Preview-Anon
- PJMixers-Dev/lemonilia_LimaRP-Simple-CustomShareGPT-Shuffled
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- PJMixers-Dev/NyxKrage_chub-logs-sharegpt-longest-CustomShareGPT
- PJMixers/OpenLeecher_Teatime_all_logs_longest-ShareGPT
- grimulkan/aicg-logs-augmented
- grimulkan/PIPPA-augmented-dedup
- PJMixers/grimulkan_bluemoon_Karen_cleaned-carded-formatted
- PJMixers/lodrick-the-lafted_OpusStories-ShareGPT
- Gryphe/ChatGPT-4o-Writing-Prompts
- Gryphe/Opus-WritingPrompts
- anthracite-org/nopm_claude_writing_fixed
- PJMixers-Dev/Tiefighter-13B-Fake-Distill-ShareGPT
- allura-org/fujin-instruct-v2
- ToastyPigeon/gutenberg-sft
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- TheDrummer/AmoralQA-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/PJMixers-Dev/Granite-3.1-Earthen-v0.3-3B-A800M
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
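For a concrete starting point, here is a minimal llama-cpp-python sketch (a hedged addition, not part of the original usage notes); it pulls the Q4_K_M file from the table below:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Downloads the chosen quant from this repo and runs a short completion.
# Q4_K_M is the "fast, recommended" pick in the table; any other file works too.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF",
    filename="Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q4_K_M.gguf",
)
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```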
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-IQ3_M.gguf) | i1-IQ3_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Granite-3.1-Earthen-v0.3-3B-A800M-i1-GGUF/resolve/main/Granite-3.1-Earthen-v0.3-3B-A800M.i1-Q6_K.gguf) | i1-Q6_K | 2.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
phospho-app/jmota27-gr00t-boat_cup_dataset-npbv8 | phospho-app | 2025-05-26T09:15:29Z | 0 | 0 | null | [
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
]
| null | 2025-05-26T08:49:06Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [jmota27/boat_cup_dataset](https://huggingface.co/datasets/jmota27/boat_cup_dataset)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
π **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
π€ **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
wuxia196/dqn-SpaceInvadersNoFrameskip-v4 | wuxia196 | 2025-05-26T09:15:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-26T07:43:10Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 488.00 +/- 141.16
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wuxia196 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wuxia196 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga wuxia196
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 200000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
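If you prefer to skip the RL Zoo CLI, a minimal stable-baselines3 loading sketch follows; the zip filename assumes the usual `rl_zoo3` push convention:

```python
from huggingface_hub import hf_hub_download
from stable_baselines3 import DQN

# Fetch the checkpoint from the Hub and load it directly with SB3.
# The filename is an assumption based on how rl_zoo3 names pushed models.
path = hf_hub_download(
    repo_id="wuxia196/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(path)
print(model.policy)  # inspect the CNN policy; use model.predict(obs) inside a wrapped Atari env
```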
|
Hyaline/Domaino1s-finance | Hyaline | 2025-05-26T09:13:41Z | 1 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:2501.14431",
"license:apache-2.0",
"region:us"
]
| null | 2025-01-19T01:32:02Z | ---
license: apache-2.0
---
This repository stores the model parameters for our paper [Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains](https://arxiv.org/abs/2501.14431).
Our paper has been accepted to the Findings of **ACL 2025**.
More details: [Domaino1s](https://github.com/Hyalinesky/Domaino1s)
## Citation
If you find our work helpful, feel free to cite us.
```
@article{chu2025domaino1s,
title={Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains},
author={Chu, Xu and Tan, Zhijie and Xue, Hanlin and Wang, Guanyu and Mo, Tong and Li, Weiping},
journal={arXiv preprint arXiv:2501.14431},
year={2025}
}
``` |
ashani/ppo-SnowballTarget | ashani | 2025-05-26T09:09:56Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2025-05-26T09:09:52Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ashani/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
ksun1ch/Salesforce-codet5-base-nl2vis-for-nl2sql | ksun1ch | 2025-05-26T09:08:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-26T08:49:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedelgebaly/llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E2 | ahmedelgebaly | 2025-05-26T09:07:12Z | 16 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-13T14:02:31Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SQuad_SciQ_HotpotQA_Alpaca_Equal
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SQuad_SciQ_HotpotQA_Alpaca_Equal
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 64 #Before it was 16
lora_dropout: 0.05
lora_target_modules: #Before it was empty
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: true #Before it was false
bf16: auto
tf32: false
gradient_checkpointing: true
flash_attention: true
warmup_steps: 50 #Before it was 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8821
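A minimal inference sketch (not from the card): attaching this QLoRA adapter to its base model with PEFT. The prompt format and generation settings are assumptions:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3.1-8B"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "ahmedelgebaly/llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E2")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# The adapter was trained on alpaca-style QA data, so an instruction-style
# prompt is a reasonable (but unverified) choice.
prompt = "### Instruction:\nWhat is photosynthesis?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```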
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 1.7171 |
| 0.871 | 0.2527 | 70 | 0.9221 |
| 0.8191 | 0.5054 | 140 | 0.8980 |
| 0.8102 | 0.7581 | 210 | 0.8845 |
| 0.6998 | 1.0081 | 280 | 0.8708 |
| 0.6609 | 1.2608 | 350 | 0.8868 |
| 0.661 | 1.5135 | 420 | 0.8825 |
| 0.6388 | 1.7662 | 490 | 0.8821 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ahmedelgebaly/llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E3 | ahmedelgebaly | 2025-05-26T09:06:51Z | 24 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-13T14:02:32Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SQuad_SciQ_HotpotQA_Alpaca_Equal
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SQuad_SciQ_HotpotQA_Alpaca_Equal
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 64 #Before it was 16
lora_dropout: 0.05
lora_target_modules: #Before it was empty
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E3
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E3
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E3
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: true #Before it was false
bf16: auto
tf32: false
gradient_checkpointing: true
flash_attention: true
warmup_steps: 50 #Before it was 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-Squad_SciQ_HotpotQA_Equal_E3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0036 | 1 | 1.7171 |
| 0.8618 | 0.2527 | 70 | 0.9238 |
| 0.801 | 0.5054 | 140 | 0.8990 |
| 0.8208 | 0.7581 | 210 | 0.8868 |
| 0.7207 | 1.0081 | 280 | 0.8757 |
| 0.6826 | 1.2608 | 350 | 0.8944 |
| 0.6631 | 1.5135 | 420 | 0.8995 |
| 0.6935 | 1.7662 | 490 | 0.8873 |
| 0.4885 | 2.0171 | 560 | 0.9172 |
| 0.4611 | 2.2699 | 630 | 0.9876 |
| 0.4425 | 2.5226 | 700 | 0.9970 |
| 0.4339 | 2.7753 | 770 | 0.9982 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
sp-embraceable/Phi4-FT-unsloth-runpod-2500steps-e1-above90-adapter | sp-embraceable | 2025-05-26T09:05:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/phi-4",
"base_model:adapter:unsloth/phi-4",
"region:us"
]
| null | 2025-05-26T09:01:31Z | ---
base_model: unsloth/Phi-4
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
yukieos/gemma-2-2b-it-lora | yukieos | 2025-05-26T09:04:56Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2-2b-it",
"base_model:adapter:google/gemma-2-2b-it",
"license:gemma",
"region:us"
]
| null | 2025-05-26T07:56:01Z | ---
library_name: peft
license: gemma
base_model: google/gemma-2-2b-it
tags:
- generated_from_trainer
model-index:
- name: gemma-2-2b-it-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-2b-it-lora
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1 |
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e4 | ahmedelgebaly | 2025-05-26T09:04:54Z | 13 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-23T13:17:58Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_e4
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e4
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2_SciQ_e4
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e4
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 4
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e4
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7866 | 0.0305 | 1 | 1.8420 |
| 1.1285 | 0.2443 | 8 | 1.0966 |
| 0.841 | 0.4885 | 16 | 0.9650 |
| 0.8667 | 0.7328 | 24 | 0.9336 |
| 0.8586 | 0.9771 | 32 | 0.9192 |
| 0.8353 | 1.2137 | 40 | 0.9084 |
| 0.8003 | 1.4580 | 48 | 0.9014 |
| 0.8299 | 1.7023 | 56 | 0.8971 |
| 0.8033 | 1.9466 | 64 | 0.8940 |
| 0.81 | 2.1832 | 72 | 0.8922 |
| 0.81 | 2.4275 | 80 | 0.8918 |
| 0.7858 | 2.6718 | 88 | 0.8887 |
| 0.7595 | 2.9160 | 96 | 0.8882 |
| 0.7574 | 3.1527 | 104 | 0.8884 |
| 0.7754 | 3.3969 | 112 | 0.8896 |
| 0.7671 | 3.6412 | 120 | 0.8894 |
| 0.7495 | 3.8855 | 128 | 0.8895 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Datasmartly/nllb-tamazight-finetunedmixe1 | Datasmartly | 2025-05-26T09:04:30Z | 0 | 0 | null | [
"safetensors",
"m2m_100",
"generated_from_trainer",
"base_model:facebook/nllb-200-3.3B",
"base_model:finetune:facebook/nllb-200-3.3B",
"license:cc-by-nc-4.0",
"region:us"
]
| null | 2025-05-26T08:49:01Z | ---
license: cc-by-nc-4.0
base_model: facebook/nllb-200-3.3B
tags:
- generated_from_trainer
model-index:
- name: nllb-tamazight-finetunedmixe1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-tamazight-finetunedmixe1
This model is a fine-tuned version of [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0406 | 1.0 | 225 | 0.7700 |
| 0.1517 | 2.0 | 450 | 0.1944 |
| 0.0553 | 3.0 | 675 | 0.2100 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.4.1+cu124
- Datasets 3.6.0
- Tokenizers 0.15.2
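A minimal inference sketch (not part of the card): standard NLLB generation with a forced target-language token. Both language codes are assumptions; `tzm_Tfng` is the FLORES-200 code for Central Atlas Tamazight, and the fine-tune's actual source/target pair is not documented:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "Datasmartly/nllb-tamazight-finetunedmixe1"
tokenizer = AutoTokenizer.from_pretrained(repo, src_lang="fra_Latn")  # source code: assumption
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Bonjour, comment allez-vous ?", return_tensors="pt")
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("tzm_Tfng"),  # target code: assumption
    max_new_tokens=64,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```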
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E5_V2 | ahmedelgebaly | 2025-05-26T09:04:29Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-22T22:43:46Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_E5_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3-8B
lora_model_dir: ahmedelgebaly/llama-3.1-8b-squadv2_E5_V2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
- path: ahmedelgebaly/SQuad_2_Alpaca
type: alpaca
split: train
percentage: 0.1 # small replay buffer to avoid forgetting
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 64 #Before it was 16
lora_dropout: 0.05
lora_target_modules: #Before it was empty
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e5_v2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e5_v2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E5_V2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 5
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: true #Before it was false
bf16: auto
tf32: false
gradient_checkpointing: true
flash_attention: true
warmup_steps: 50 #Before it was 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_E5_V2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0029 | 1 | 3.2812 |
| 0.5424 | 0.2504 | 85 | 0.9365 |
| 0.5614 | 0.5007 | 170 | 0.9119 |
| 0.5595 | 0.7511 | 255 | 0.9003 |
| 0.5602 | 1.0015 | 340 | 0.8944 |
| 0.3603 | 1.2496 | 425 | 0.9270 |
| 0.3926 | 1.5 | 510 | 0.9218 |
| 0.4007 | 1.7504 | 595 | 0.9171 |
| 0.3932 | 2.0007 | 680 | 0.9158 |
| 0.2447 | 2.2489 | 765 | 1.0126 |
| 0.2629 | 2.4993 | 850 | 0.9968 |
| 0.2566 | 2.7496 | 935 | 1.0126 |
| 0.2551 | 3.0 | 1020 | 1.0091 |
| 0.1665 | 3.2482 | 1105 | 1.1822 |
| 0.1683 | 3.4985 | 1190 | 1.1927 |
| 0.1717 | 3.7489 | 1275 | 1.1943 |
| 0.1718 | 3.9993 | 1360 | 1.1832 |
| 0.1133 | 4.2474 | 1445 | 1.3833 |
| 0.1158 | 4.4978 | 1530 | 1.3814 |
| 0.1175 | 4.7482 | 1615 | 1.3837 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ai-forever/FRIDA | ai-forever | 2025-05-26T09:04:19Z | 13,940 | 54 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"t5",
"mteb",
"transformers",
"feature-extraction",
"ru",
"en",
"dataset:ai-forever/solyanka",
"arxiv:2309.10931",
"arxiv:2408.12503",
"base_model:ai-forever/FRED-T5-1.7B",
"base_model:finetune:ai-forever/FRED-T5-1.7B",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2024-12-26T15:07:35Z | ---
model-index:
- name: FRIDA
results:
- dataset:
config: default
name: MTEB CEDRClassification (default)
revision: c0ba03d058e3e1b2f3fd20518875a4563dd12db4
split: test
type: ai-forever/cedr-classification
metrics:
- type: accuracy
value: 64.60148777895856
- type: f1
value: 70.36630348039266
- type: lrap
value: 92.47290116896953
- type: main_score
value: 64.60148777895856
task:
type: MultilabelClassification
- dataset:
config: default
name: MTEB GeoreviewClassification (default)
revision: 3765c0d1de6b7d264bc459433c45e5a75513839c
split: test
type: ai-forever/georeview-classification
metrics:
- type: accuracy
value: 57.70996093750001
- type: f1
value: 53.18542982057098
- type: f1_weighted
value: 53.17663229582108
- type: main_score
value: 57.70996093750001
task:
type: Classification
- dataset:
config: default
name: MTEB GeoreviewClusteringP2P (default)
revision: 97a313c8fc85b47f13f33e7e9a95c1ad888c7fec
split: test
type: ai-forever/georeview-clustering-p2p
metrics:
- type: main_score
value: 78.25468393043356
- type: v_measure
value: 78.25468393043356
- type: v_measure_std
value: 0.5094366871364238
task:
type: Clustering
- dataset:
config: default
name: MTEB HeadlineClassification (default)
revision: 2fe05ee6b5832cda29f2ef7aaad7b7fe6a3609eb
split: test
type: ai-forever/headline-classification
metrics:
- type: accuracy
value: 89.0185546875
- type: f1
value: 88.993933120612
- type: f1_weighted
value: 88.99276764225768
- type: main_score
value: 89.0185546875
task:
type: Classification
- dataset:
config: default
name: MTEB InappropriatenessClassification (default)
revision: 601651fdc45ef243751676e62dd7a19f491c0285
split: test
type: ai-forever/inappropriateness-classification
metrics:
- type: accuracy
value: 78.330078125
- type: ap
value: 73.17856750532495
- type: ap_weighted
value: 73.17856750532495
- type: f1
value: 78.20169867599041
- type: f1_weighted
value: 78.20169867599041
- type: main_score
value: 78.330078125
task:
type: Classification
- dataset:
config: default
name: MTEB KinopoiskClassification (default)
revision: 5911f26666ac11af46cb9c6849d0dc80a378af24
split: test
type: ai-forever/kinopoisk-sentiment-classification
metrics:
- type: accuracy
value: 70.46666666666665
- type: f1
value: 65.83951766538878
- type: f1_weighted
value: 65.83951766538878
- type: main_score
value: 70.46666666666665
task:
type: Classification
- dataset:
config: ru
name: MTEB MIRACLReranking (ru)
revision: 6d1962c527217f8927fca80f890f14f36b2802af
split: dev
type: miracl/mmteb-miracl-reranking
metrics:
- type: MAP@1(MIRACL)
value: 39.023
- type: MAP@10(MIRACL)
value: 60.208
- type: MAP@100(MIRACL)
value: 61.672000000000004
- type: MAP@1000(MIRACL)
value: 61.672000000000004
- type: MAP@20(MIRACL)
value: 61.30799999999999
- type: MAP@3(MIRACL)
value: 53.33
- type: MAP@5(MIRACL)
value: 57.289
- type: NDCG@1(MIRACL)
value: 63.352
- type: NDCG@10(MIRACL)
value: 66.042
- type: NDCG@100(MIRACL)
value: 68.702
- type: NDCG@1000(MIRACL)
value: 68.702
- type: NDCG@20(MIRACL)
value: 67.768
- type: NDCG@3(MIRACL)
value: 61.925
- type: NDCG@5(MIRACL)
value: 63.327
- type: P@1(MIRACL)
value: 63.352
- type: P@10(MIRACL)
value: 16.512
- type: P@100(MIRACL)
value: 1.9529999999999998
- type: P@1000(MIRACL)
value: 0.19499999999999998
- type: P@20(MIRACL)
value: 9.13
- type: P@3(MIRACL)
value: 37.878
- type: P@5(MIRACL)
value: 27.586
- type: Recall@1(MIRACL)
value: 39.023
- type: Recall@10(MIRACL)
value: 72.35000000000001
- type: Recall@100(MIRACL)
value: 79.952
- type: Recall@1000(MIRACL)
value: 79.952
- type: Recall@20(MIRACL)
value: 76.828
- type: Recall@3(MIRACL)
value: 57.769999999999996
- type: Recall@5(MIRACL)
value: 64.91900000000001
- type: main_score
value: 66.042
- type: nAUC_MAP@1000_diff1(MIRACL)
value: 27.150388833033052
- type: nAUC_MAP@1000_max(MIRACL)
value: 55.15672274267081
- type: nAUC_MAP@1000_std(MIRACL)
value: 30.088939934575553
- type: nAUC_MAP@100_diff1(MIRACL)
value: 27.150388833033052
- type: nAUC_MAP@100_max(MIRACL)
value: 55.15672274267081
- type: nAUC_MAP@100_std(MIRACL)
value: 30.088939934575553
- type: nAUC_MAP@10_diff1(MIRACL)
value: 27.853691773641742
- type: nAUC_MAP@10_max(MIRACL)
value: 52.89390350055654
- type: nAUC_MAP@10_std(MIRACL)
value: 28.08732516551691
- type: nAUC_MAP@1_diff1(MIRACL)
value: 43.23179150244192
- type: nAUC_MAP@1_max(MIRACL)
value: 29.923943954188864
- type: nAUC_MAP@1_std(MIRACL)
value: 7.447084370195121
- type: nAUC_MAP@20_diff1(MIRACL)
value: 27.328384072311675
- type: nAUC_MAP@20_max(MIRACL)
value: 54.60286379835721
- type: nAUC_MAP@20_std(MIRACL)
value: 29.8084128980043
- type: nAUC_MAP@3_diff1(MIRACL)
value: 31.244971536944554
- type: nAUC_MAP@3_max(MIRACL)
value: 43.63984692803854
- type: nAUC_MAP@3_std(MIRACL)
value: 18.609234683765887
- type: nAUC_MAP@5_diff1(MIRACL)
value: 29.088760492638286
- type: nAUC_MAP@5_max(MIRACL)
value: 48.30474364461509
- type: nAUC_MAP@5_std(MIRACL)
value: 23.817514353844224
- type: nAUC_NDCG@1000_diff1(MIRACL)
value: 23.12754356408408
- type: nAUC_NDCG@1000_max(MIRACL)
value: 64.24894553363303
- type: nAUC_NDCG@1000_std(MIRACL)
value: 38.19318050598967
- type: nAUC_NDCG@100_diff1(MIRACL)
value: 23.12754356408408
- type: nAUC_NDCG@100_max(MIRACL)
value: 64.24894553363303
- type: nAUC_NDCG@100_std(MIRACL)
value: 38.19318050598967
- type: nAUC_NDCG@10_diff1(MIRACL)
value: 24.779856373697275
- type: nAUC_NDCG@10_max(MIRACL)
value: 60.4054459738118
- type: nAUC_NDCG@10_std(MIRACL)
value: 35.148950441182784
- type: nAUC_NDCG@1_diff1(MIRACL)
value: 35.605865569438556
- type: nAUC_NDCG@1_max(MIRACL)
value: 65.77787399715454
- type: nAUC_NDCG@1_std(MIRACL)
value: 34.34726892885082
- type: nAUC_NDCG@20_diff1(MIRACL)
value: 23.71231783125691
- type: nAUC_NDCG@20_max(MIRACL)
value: 62.89676599488004
- type: nAUC_NDCG@20_std(MIRACL)
value: 37.697052941884316
- type: nAUC_NDCG@3_diff1(MIRACL)
value: 26.109027741640865
- type: nAUC_NDCG@3_max(MIRACL)
value: 56.22356793638693
- type: nAUC_NDCG@3_std(MIRACL)
value: 29.9437568508688
- type: nAUC_NDCG@5_diff1(MIRACL)
value: 25.98644715327336
- type: nAUC_NDCG@5_max(MIRACL)
value: 56.25032008404774
- type: nAUC_NDCG@5_std(MIRACL)
value: 31.581899860862578
- type: nAUC_P@1000_diff1(MIRACL)
value: -18.29912787064644
- type: nAUC_P@1000_max(MIRACL)
value: 31.811344878776087
- type: nAUC_P@1000_std(MIRACL)
value: 30.163820183304914
- type: nAUC_P@100_diff1(MIRACL)
value: -18.299127870646405
- type: nAUC_P@100_max(MIRACL)
value: 31.811344878776133
- type: nAUC_P@100_std(MIRACL)
value: 30.163820183304956
- type: nAUC_P@10_diff1(MIRACL)
value: -15.96416268531149
- type: nAUC_P@10_max(MIRACL)
value: 36.989578896466526
- type: nAUC_P@10_std(MIRACL)
value: 34.54507111688143
- type: nAUC_P@1_diff1(MIRACL)
value: 35.605865569438556
- type: nAUC_P@1_max(MIRACL)
value: 65.77787399715454
- type: nAUC_P@1_std(MIRACL)
value: 34.34726892885082
- type: nAUC_P@20_diff1(MIRACL)
value: -17.443963421383287
- type: nAUC_P@20_max(MIRACL)
value: 34.309618168778385
- type: nAUC_P@20_std(MIRACL)
value: 33.38820956485373
- type: nAUC_P@3_diff1(MIRACL)
value: -8.533621861815652
- type: nAUC_P@3_max(MIRACL)
value: 45.90408386776497
- type: nAUC_P@3_std(MIRACL)
value: 34.50459351305535
- type: nAUC_P@5_diff1(MIRACL)
value: -13.207968899314865
- type: nAUC_P@5_max(MIRACL)
value: 40.37718282248973
- type: nAUC_P@5_std(MIRACL)
value: 35.601417332196206
- type: nAUC_Recall@1000_diff1(MIRACL)
value: 7.907304198177226
- type: nAUC_Recall@1000_max(MIRACL)
value: 77.82197832361145
- type: nAUC_Recall@1000_std(MIRACL)
value: 52.66957487246724
- type: nAUC_Recall@100_diff1(MIRACL)
value: 7.907304198177226
- type: nAUC_Recall@100_max(MIRACL)
value: 77.82197832361145
- type: nAUC_Recall@100_std(MIRACL)
value: 52.66957487246724
- type: nAUC_Recall@10_diff1(MIRACL)
value: 15.498121023488693
- type: nAUC_Recall@10_max(MIRACL)
value: 62.24320529338724
- type: nAUC_Recall@10_std(MIRACL)
value: 40.60221460946224
- type: nAUC_Recall@1_diff1(MIRACL)
value: 43.23179150244192
- type: nAUC_Recall@1_max(MIRACL)
value: 29.923943954188864
- type: nAUC_Recall@1_std(MIRACL)
value: 7.447084370195121
- type: nAUC_Recall@20_diff1(MIRACL)
value: 11.457044176116248
- type: nAUC_Recall@20_max(MIRACL)
value: 70.3493054342368
- type: nAUC_Recall@20_std(MIRACL)
value: 49.27124296325928
- type: nAUC_Recall@3_diff1(MIRACL)
value: 25.12077828977941
- type: nAUC_Recall@3_max(MIRACL)
value: 42.903379317937166
- type: nAUC_Recall@3_std(MIRACL)
value: 20.324501722161497
- type: nAUC_Recall@5_diff1(MIRACL)
value: 20.925701235197977
- type: nAUC_Recall@5_max(MIRACL)
value: 49.85323960390812
- type: nAUC_Recall@5_std(MIRACL)
value: 29.04484539530469
task:
type: Reranking
- dataset:
config: ru
name: MTEB MIRACLRetrieval (ru)
revision: main
split: dev
type: miracl/mmteb-miracl
metrics:
- type: main_score
value: 71.882
- type: map_at_1
value: 37.913000000000004
- type: map_at_10
value: 62.604000000000006
- type: map_at_100
value: 64.925
- type: map_at_1000
value: 64.992
- type: map_at_20
value: 64.081
- type: map_at_3
value: 55.212
- type: map_at_5
value: 59.445
- type: mrr_at_1
value: 73.24281150159744
- type: mrr_at_10
value: 81.65043866321825
- type: mrr_at_100
value: 81.85391378818977
- type: mrr_at_1000
value: 81.85753390802569
- type: mrr_at_20
value: 81.81045606130179
- type: mrr_at_3
value: 80.56443024494146
- type: mrr_at_5
value: 81.30724174653893
- type: nauc_map_at_1000_diff1
value: 26.962150235593356
- type: nauc_map_at_1000_max
value: 29.234958037854568
- type: nauc_map_at_1000_std
value: -2.4294465103633884
- type: nauc_map_at_100_diff1
value: 26.92990252114163
- type: nauc_map_at_100_max
value: 29.206328533120118
- type: nauc_map_at_100_std
value: -2.437371090941197
- type: nauc_map_at_10_diff1
value: 25.758265691179226
- type: nauc_map_at_10_max
value: 26.949978490795317
- type: nauc_map_at_10_std
value: -5.484961002106038
- type: nauc_map_at_1_diff1
value: 34.70849461278043
- type: nauc_map_at_1_max
value: 12.778570893623042
- type: nauc_map_at_1_std
value: -13.018292652743938
- type: nauc_map_at_20_diff1
value: 26.659923008218268
- type: nauc_map_at_20_max
value: 28.341440871568185
- type: nauc_map_at_20_std
value: -3.614549844913084
- type: nauc_map_at_3_diff1
value: 27.197629021438203
- type: nauc_map_at_3_max
value: 20.701094874050856
- type: nauc_map_at_3_std
value: -12.062992301112041
- type: nauc_map_at_5_diff1
value: 25.51793537203295
- type: nauc_map_at_5_max
value: 23.80396771243794
- type: nauc_map_at_5_std
value: -8.920465695323575
- type: nauc_mrr_at_1000_diff1
value: 45.14819989592967
- type: nauc_mrr_at_1000_max
value: 53.29202156141053
- type: nauc_mrr_at_1000_std
value: 18.037336462510524
- type: nauc_mrr_at_100_diff1
value: 45.15287600228451
- type: nauc_mrr_at_100_max
value: 53.29979751928615
- type: nauc_mrr_at_100_std
value: 18.04996604778386
- type: nauc_mrr_at_10_diff1
value: 44.96865105944474
- type: nauc_mrr_at_10_max
value: 53.53323465323092
- type: nauc_mrr_at_10_std
value: 18.25001344917689
- type: nauc_mrr_at_1_diff1
value: 46.16604946873163
- type: nauc_mrr_at_1_max
value: 48.573651103547874
- type: nauc_mrr_at_1_std
value: 13.764871626330915
- type: nauc_mrr_at_20_diff1
value: 45.11925458479102
- type: nauc_mrr_at_20_max
value: 53.35685123898342
- type: nauc_mrr_at_20_std
value: 18.127344968819905
- type: nauc_mrr_at_3_diff1
value: 45.377195452730234
- type: nauc_mrr_at_3_max
value: 53.35146309217089
- type: nauc_mrr_at_3_std
value: 17.47105877186237
- type: nauc_mrr_at_5_diff1
value: 45.00525578771549
- type: nauc_mrr_at_5_max
value: 53.76227254707128
- type: nauc_mrr_at_5_std
value: 18.437290060746957
- type: nauc_ndcg_at_1000_diff1
value: 31.19215594457491
- type: nauc_ndcg_at_1000_max
value: 38.09555406458668
- type: nauc_ndcg_at_1000_std
value: 7.225628621238009
- type: nauc_ndcg_at_100_diff1
value: 30.726331247999934
- type: nauc_ndcg_at_100_max
value: 37.81369589418277
- type: nauc_ndcg_at_100_std
value: 7.242855238555071
- type: nauc_ndcg_at_10_diff1
value: 27.514048333744835
- type: nauc_ndcg_at_10_max
value: 33.10990399385253
- type: nauc_ndcg_at_10_std
value: 0.3051899572112002
- type: nauc_ndcg_at_1_diff1
value: 47.06089085235751
- type: nauc_ndcg_at_1_max
value: 47.7300872370495
- type: nauc_ndcg_at_1_std
value: 12.468605493613916
- type: nauc_ndcg_at_20_diff1
value: 29.404215438764496
- type: nauc_ndcg_at_20_max
value: 35.26967886796471
- type: nauc_ndcg_at_20_std
value: 3.7214697890813353
- type: nauc_ndcg_at_3_diff1
value: 29.448848639643067
- type: nauc_ndcg_at_3_max
value: 33.85912412370657
- type: nauc_ndcg_at_3_std
value: 0.895453646819452
- type: nauc_ndcg_at_5_diff1
value: 26.916649012613526
- type: nauc_ndcg_at_5_max
value: 30.899005979291644
- type: nauc_ndcg_at_5_std
value: -1.0001575639156615
- type: nauc_precision_at_1000_diff1
value: -8.492004667432635
- type: nauc_precision_at_1000_max
value: 14.970190384017679
- type: nauc_precision_at_1000_std
value: 32.871386621137816
- type: nauc_precision_at_100_diff1
value: -8.287314133999967
- type: nauc_precision_at_100_max
value: 17.794821961284736
- type: nauc_precision_at_100_std
value: 35.092483550562
- type: nauc_precision_at_10_diff1
value: -7.594128993028063
- type: nauc_precision_at_10_max
value: 24.691446370325732
- type: nauc_precision_at_10_std
value: 30.126552282608493
- type: nauc_precision_at_1_diff1
value: 47.06089085235751
- type: nauc_precision_at_1_max
value: 47.7300872370495
- type: nauc_precision_at_1_std
value: 12.468605493613916
- type: nauc_precision_at_20_diff1
value: -6.503872195775146
- type: nauc_precision_at_20_max
value: 21.789730053141312
- type: nauc_precision_at_20_std
value: 32.61349377558794
- type: nauc_precision_at_3_diff1
value: 0.67417079971061
- type: nauc_precision_at_3_max
value: 30.793871354370662
- type: nauc_precision_at_3_std
value: 18.35266479252011
- type: nauc_precision_at_5_diff1
value: -7.088881730215777
- type: nauc_precision_at_5_max
value: 26.539771712769006
- type: nauc_precision_at_5_std
value: 24.116262291865834
- type: nauc_recall_at_1000_diff1
value: 34.53263588412461
- type: nauc_recall_at_1000_max
value: 63.54157869100173
- type: nauc_recall_at_1000_std
value: 64.19854844792808
- type: nauc_recall_at_100_diff1
value: 22.86564728642275
- type: nauc_recall_at_100_max
value: 40.350507162549825
- type: nauc_recall_at_100_std
value: 29.24492545863015
- type: nauc_recall_at_10_diff1
value: 15.384818367225009
- type: nauc_recall_at_10_max
value: 24.41108571453699
- type: nauc_recall_at_10_std
value: -3.9216160585776323
- type: nauc_recall_at_1_diff1
value: 34.70849461278043
- type: nauc_recall_at_1_max
value: 12.778570893623042
- type: nauc_recall_at_1_std
value: -13.018292652743938
- type: nauc_recall_at_20_diff1
value: 18.122499000084208
- type: nauc_recall_at_20_max
value: 26.63104220179424
- type: nauc_recall_at_20_std
value: 3.969217732521512
- type: nauc_recall_at_3_diff1
value: 21.413050725250116
- type: nauc_recall_at_3_max
value: 16.18894988386887
- type: nauc_recall_at_3_std
value: -15.24884339282375
- type: nauc_recall_at_5_diff1
value: 16.35673072212927
- type: nauc_recall_at_5_max
value: 18.607003829267846
- type: nauc_recall_at_5_std
value: -10.463525876945454
- type: ndcg_at_1
value: 72.923
- type: ndcg_at_10
value: 71.882
- type: ndcg_at_100
value: 77.09899999999999
- type: ndcg_at_1000
value: 77.835
- type: ndcg_at_20
value: 74.497
- type: ndcg_at_3
value: 68.504
- type: ndcg_at_5
value: 69.068
- type: precision_at_1
value: 72.923
- type: precision_at_10
value: 19.936
- type: precision_at_100
value: 2.6310000000000002
- type: precision_at_1000
value: 0.27799999999999997
- type: precision_at_20
value: 11.33
- type: precision_at_3
value: 45.927
- type: precision_at_5
value: 33.131
- type: recall_at_1
value: 37.913000000000004
- type: recall_at_10
value: 78.365
- type: recall_at_100
value: 94.348
- type: recall_at_1000
value: 98.187
- type: recall_at_20
value: 85.229
- type: recall_at_3
value: 61.42999999999999
- type: recall_at_5
value: 69.56700000000001
task:
type: Retrieval
- dataset:
config: ru
name: MTEB MassiveIntentClassification (ru)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 79.11903160726294
- type: f1
value: 76.22609082694545
- type: f1_weighted
value: 77.81461248063566
- type: main_score
value: 79.11903160726294
task:
type: Classification
- dataset:
config: ru
name: MTEB MassiveScenarioClassification (ru)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 88.80632145258912
- type: f1
value: 87.53157475314829
- type: f1_weighted
value: 88.22733432521495
- type: main_score
value: 88.80632145258912
task:
type: Classification
- dataset:
config: default
name: MTEB RUParaPhraserSTS (default)
revision: 43265056790b8f7c59e0139acb4be0a8dad2c8f4
split: test
type: merionum/ru_paraphraser
metrics:
- type: cosine_pearson
value: 72.70307124858925
- type: cosine_spearman
value: 78.09439086920204
- type: euclidean_pearson
value: 76.2033672014715
- type: euclidean_spearman
value: 78.09439086920204
- type: main_score
value: 78.09439086920204
- type: manhattan_pearson
value: 76.11750470223116
- type: manhattan_spearman
value: 78.01081063503413
- type: pearson
value: 72.70307124858925
- type: spearman
value: 78.09439086920204
task:
type: STS
- dataset:
config: default
name: MTEB RiaNewsRetrieval (default)
revision: 82374b0bbacda6114f39ff9c5b925fa1512ca5d7
split: test
type: ai-forever/ria-news-retrieval
metrics:
- type: main_score
value: 86.819
- type: map_at_1
value: 78.79
- type: map_at_10
value: 84.516
- type: map_at_100
value: 84.68
- type: map_at_1000
value: 84.685
- type: map_at_20
value: 84.624
- type: map_at_3
value: 83.722
- type: map_at_5
value: 84.246
- type: mrr_at_1
value: 78.78
- type: mrr_at_10
value: 84.51815476190441
- type: mrr_at_100
value: 84.68390840473289
- type: mrr_at_1000
value: 84.68947095200002
- type: mrr_at_20
value: 84.62958130822527
- type: mrr_at_3
value: 83.74499999999964
- type: mrr_at_5
value: 84.23849999999955
- type: nauc_map_at_1000_diff1
value: 82.09914867708899
- type: nauc_map_at_1000_max
value: 43.02024854784386
- type: nauc_map_at_1000_std
value: -22.919695880762777
- type: nauc_map_at_100_diff1
value: 82.09705922783733
- type: nauc_map_at_100_max
value: 43.02697379581718
- type: nauc_map_at_100_std
value: -22.90719212899522
- type: nauc_map_at_10_diff1
value: 82.04404594672894
- type: nauc_map_at_10_max
value: 43.06752103182731
- type: nauc_map_at_10_std
value: -23.007870153273576
- type: nauc_map_at_1_diff1
value: 83.89134152210333
- type: nauc_map_at_1_max
value: 38.083626428503415
- type: nauc_map_at_1_std
value: -25.817960401194252
- type: nauc_map_at_20_diff1
value: 82.08534662247806
- type: nauc_map_at_20_max
value: 43.074305042312346
- type: nauc_map_at_20_std
value: -22.91785703613217
- type: nauc_map_at_3_diff1
value: 81.7967508697558
- type: nauc_map_at_3_max
value: 42.90927479098251
- type: nauc_map_at_3_std
value: -24.01312203859392
- type: nauc_map_at_5_diff1
value: 81.90704517505098
- type: nauc_map_at_5_max
value: 43.05204677044616
- type: nauc_map_at_5_std
value: -23.267331507554896
- type: nauc_mrr_at_1000_diff1
value: 82.11902348082472
- type: nauc_mrr_at_1000_max
value: 43.04118936353063
- type: nauc_mrr_at_1000_std
value: -22.858804296830773
- type: nauc_mrr_at_100_diff1
value: 82.11685562002263
- type: nauc_mrr_at_100_max
value: 43.0482537895494
- type: nauc_mrr_at_100_std
value: -22.84431127787993
- type: nauc_mrr_at_10_diff1
value: 82.06909958688058
- type: nauc_mrr_at_10_max
value: 43.07921689466605
- type: nauc_mrr_at_10_std
value: -22.957623576663234
- type: nauc_mrr_at_1_diff1
value: 83.91147637794326
- type: nauc_mrr_at_1_max
value: 37.91917159543152
- type: nauc_mrr_at_1_std
value: -26.141868289283266
- type: nauc_mrr_at_20_diff1
value: 82.10314004731809
- type: nauc_mrr_at_20_max
value: 43.09295406509764
- type: nauc_mrr_at_20_std
value: -22.862091782178787
- type: nauc_mrr_at_3_diff1
value: 81.82117067269036
- type: nauc_mrr_at_3_max
value: 42.94628953323521
- type: nauc_mrr_at_3_std
value: -23.852510312400714
- type: nauc_mrr_at_5_diff1
value: 81.92857441701598
- type: nauc_mrr_at_5_max
value: 43.129719354492934
- type: nauc_mrr_at_5_std
value: -23.145342272624085
- type: nauc_ndcg_at_1000_diff1
value: 81.75015729717991
- type: nauc_ndcg_at_1000_max
value: 44.7266586308995
- type: nauc_ndcg_at_1000_std
value: -20.60663899715267
- type: nauc_ndcg_at_100_diff1
value: 81.6897808298767
- type: nauc_ndcg_at_100_max
value: 44.99492791287099
- type: nauc_ndcg_at_100_std
value: -20.09637266506936
- type: nauc_ndcg_at_10_diff1
value: 81.46290312197337
- type: nauc_ndcg_at_10_max
value: 45.30218378452244
- type: nauc_ndcg_at_10_std
value: -20.70393523891777
- type: nauc_ndcg_at_1_diff1
value: 83.89134152210333
- type: nauc_ndcg_at_1_max
value: 38.083626428503415
- type: nauc_ndcg_at_1_std
value: -25.817960401194252
- type: nauc_ndcg_at_20_diff1
value: 81.61080772657213
- type: nauc_ndcg_at_20_max
value: 45.36571800492172
- type: nauc_ndcg_at_20_std
value: -20.278763852504042
- type: nauc_ndcg_at_3_diff1
value: 80.95965359410461
- type: nauc_ndcg_at_3_max
value: 44.756971949205834
- type: nauc_ndcg_at_3_std
value: -23.07797617717319
- type: nauc_ndcg_at_5_diff1
value: 81.12417712163976
- type: nauc_ndcg_at_5_max
value: 45.15727381406512
- type: nauc_ndcg_at_5_std
value: -21.52861766165519
- type: nauc_precision_at_1000_diff1
value: 76.80566850396093
- type: nauc_precision_at_1000_max
value: 82.45685370922442
- type: nauc_precision_at_1000_std
value: 46.93570976777808
- type: nauc_precision_at_100_diff1
value: 77.21645520953484
- type: nauc_precision_at_100_max
value: 73.43604108309935
- type: nauc_precision_at_100_std
value: 31.978176891671367
- type: nauc_precision_at_10_diff1
value: 77.88251664302092
- type: nauc_precision_at_10_max
value: 60.58112638995018
- type: nauc_precision_at_10_std
value: -3.674424315180332
- type: nauc_precision_at_1_diff1
value: 83.89134152210333
- type: nauc_precision_at_1_max
value: 38.083626428503415
- type: nauc_precision_at_1_std
value: -25.817960401194252
- type: nauc_precision_at_20_diff1
value: 78.16426786697438
- type: nauc_precision_at_20_max
value: 66.0723612699222
- type: nauc_precision_at_20_std
value: 6.121527084555938
- type: nauc_precision_at_3_diff1
value: 77.43122492166451
- type: nauc_precision_at_3_max
value: 52.50727288548085
- type: nauc_precision_at_3_std
value: -19.036076920799427
- type: nauc_precision_at_5_diff1
value: 77.1127254320532
- type: nauc_precision_at_5_max
value: 56.100901899221135
- type: nauc_precision_at_5_std
value: -12.009191140844198
- type: nauc_recall_at_1000_diff1
value: 76.80566850396035
- type: nauc_recall_at_1000_max
value: 82.45685370922577
- type: nauc_recall_at_1000_std
value: 46.93570976777776
- type: nauc_recall_at_100_diff1
value: 77.21645520953459
- type: nauc_recall_at_100_max
value: 73.43604108310011
- type: nauc_recall_at_100_std
value: 31.978176891671993
- type: nauc_recall_at_10_diff1
value: 77.88251664302089
- type: nauc_recall_at_10_max
value: 60.58112638994999
- type: nauc_recall_at_10_std
value: -3.6744243151805427
- type: nauc_recall_at_1_diff1
value: 83.89134152210333
- type: nauc_recall_at_1_max
value: 38.083626428503415
- type: nauc_recall_at_1_std
value: -25.817960401194252
- type: nauc_recall_at_20_diff1
value: 78.16426786697409
- type: nauc_recall_at_20_max
value: 66.07236126992217
- type: nauc_recall_at_20_std
value: 6.121527084555941
- type: nauc_recall_at_3_diff1
value: 77.43122492166454
- type: nauc_recall_at_3_max
value: 52.507272885480816
- type: nauc_recall_at_3_std
value: -19.036076920799776
- type: nauc_recall_at_5_diff1
value: 77.11272543205318
- type: nauc_recall_at_5_max
value: 56.10090189922128
- type: nauc_recall_at_5_std
value: -12.009191140843809
- type: ndcg_at_1
value: 78.79
- type: ndcg_at_10
value: 86.819
- type: ndcg_at_100
value: 87.599
- type: ndcg_at_1000
value: 87.761
- type: ndcg_at_20
value: 87.208
- type: ndcg_at_3
value: 85.222
- type: ndcg_at_5
value: 86.164
- type: precision_at_1
value: 78.79
- type: precision_at_10
value: 9.384
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 4.769
- type: precision_at_3
value: 29.842999999999996
- type: precision_at_5
value: 18.362000000000002
- type: recall_at_1
value: 78.79
- type: recall_at_10
value: 93.84
- type: recall_at_100
value: 97.45
- type: recall_at_1000
value: 98.76
- type: recall_at_20
value: 95.37
- type: recall_at_3
value: 89.53
- type: recall_at_5
value: 91.81
task:
type: Retrieval
- dataset:
config: default
name: MTEB RuBQReranking (default)
revision: 2e96b8f098fa4b0950fc58eacadeb31c0d0c7fa2
split: test
type: ai-forever/rubq-reranking
metrics:
- type: main_score
value: 77.07394404835635
- type: map
value: 77.07394404835635
- type: mrr
value: 82.53144412718882
- type: nAUC_map_diff1
value: 45.29805217456628
- type: nAUC_map_max
value: 34.39894042439188
- type: nAUC_map_std
value: 21.11309674418275
- type: nAUC_mrr_diff1
value: 54.783994737367046
- type: nAUC_mrr_max
value: 45.68526733900048
- type: nAUC_mrr_std
value: 28.22466385500339
task:
type: Reranking
- dataset:
config: default
name: MTEB RuBQRetrieval (default)
revision: e19b6ffa60b3bc248e0b41f4cc37c26a55c2a67b
split: test
type: ai-forever/rubq-retrieval
metrics:
- type: main_score
value: 72.392
- type: map_at_1
value: 47.370000000000005
- type: map_at_10
value: 65.503
- type: map_at_100
value: 66.38
- type: map_at_1000
value: 66.42099999999999
- type: map_at_20
value: 66.071
- type: map_at_3
value: 61.439
- type: map_at_5
value: 63.922999999999995
- type: mrr_at_1
value: 67.37588652482269
- type: mrr_at_10
value: 76.0066747345116
- type: mrr_at_100
value: 76.25754138969413
- type: mrr_at_1000
value: 76.26968825657428
- type: mrr_at_20
value: 76.17548265904622
- type: mrr_at_3
value: 74.61583924349881
- type: mrr_at_5
value: 75.46690307328608
- type: nauc_map_at_1000_diff1
value: 42.52570720187294
- type: nauc_map_at_1000_max
value: 37.40318318724238
- type: nauc_map_at_1000_std
value: 0.6037788201535506
- type: nauc_map_at_100_diff1
value: 42.493410029691226
- type: nauc_map_at_100_max
value: 37.39802489244377
- type: nauc_map_at_100_std
value: 0.6071359951887154
- type: nauc_map_at_10_diff1
value: 42.09833519659916
- type: nauc_map_at_10_max
value: 37.1184138958874
- type: nauc_map_at_10_std
value: 0.4063543094010351
- type: nauc_map_at_1_diff1
value: 49.56605205141156
- type: nauc_map_at_1_max
value: 26.251096698710384
- type: nauc_map_at_1_std
value: -4.580748485387834
- type: nauc_map_at_20_diff1
value: 42.33372393482018
- type: nauc_map_at_20_max
value: 37.416955604649985
- type: nauc_map_at_20_std
value: 0.6050577802787294
- type: nauc_map_at_3_diff1
value: 42.362234475441845
- type: nauc_map_at_3_max
value: 34.56001379838821
- type: nauc_map_at_3_std
value: -1.507636598929042
- type: nauc_map_at_5_diff1
value: 42.0202264882535
- type: nauc_map_at_5_max
value: 36.64306050200848
- type: nauc_map_at_5_std
value: -0.09509025708798424
- type: nauc_mrr_at_1000_diff1
value: 58.99601742026931
- type: nauc_mrr_at_1000_max
value: 49.61561872452777
- type: nauc_mrr_at_1000_std
value: 2.3956102974352356
- type: nauc_mrr_at_100_diff1
value: 58.9865943101085
- type: nauc_mrr_at_100_max
value: 49.6248111507265
- type: nauc_mrr_at_100_std
value: 2.411155095066369
- type: nauc_mrr_at_10_diff1
value: 58.81758131092919
- type: nauc_mrr_at_10_max
value: 49.780365572616695
- type: nauc_mrr_at_10_std
value: 2.7068696565195944
- type: nauc_mrr_at_1_diff1
value: 61.67036882487055
- type: nauc_mrr_at_1_max
value: 45.455271042821714
- type: nauc_mrr_at_1_std
value: -0.9370526815458349
- type: nauc_mrr_at_20_diff1
value: 58.93674818203478
- type: nauc_mrr_at_20_max
value: 49.703218108625215
- type: nauc_mrr_at_20_std
value: 2.4473106598190415
- type: nauc_mrr_at_3_diff1
value: 59.046856598788445
- type: nauc_mrr_at_3_max
value: 49.37161726123392
- type: nauc_mrr_at_3_std
value: 1.5110936686701506
- type: nauc_mrr_at_5_diff1
value: 58.92289378915668
- type: nauc_mrr_at_5_max
value: 49.847638994134144
- type: nauc_mrr_at_5_std
value: 2.420421880131702
- type: nauc_ndcg_at_1000_diff1
value: 45.56062215161734
- type: nauc_ndcg_at_1000_max
value: 41.507152286702
- type: nauc_ndcg_at_1000_std
value: 2.79388283208751
- type: nauc_ndcg_at_100_diff1
value: 44.84064192570408
- type: nauc_ndcg_at_100_max
value: 41.50353573562353
- type: nauc_ndcg_at_100_std
value: 3.1804999773629357
- type: nauc_ndcg_at_10_diff1
value: 43.341482144213614
- type: nauc_ndcg_at_10_max
value: 41.159590898395074
- type: nauc_ndcg_at_10_std
value: 2.945242338240843
- type: nauc_ndcg_at_1_diff1
value: 62.23623985611396
- type: nauc_ndcg_at_1_max
value: 45.04945770947091
- type: nauc_ndcg_at_1_std
value: -0.8804967656575725
- type: nauc_ndcg_at_20_diff1
value: 43.905372612093664
- type: nauc_ndcg_at_20_max
value: 41.797709837872446
- type: nauc_ndcg_at_20_std
value: 3.1853356915569653
- type: nauc_ndcg_at_3_diff1
value: 44.18163998834299
- type: nauc_ndcg_at_3_max
value: 38.352891017864636
- type: nauc_ndcg_at_3_std
value: -0.8235767021150929
- type: nauc_ndcg_at_5_diff1
value: 43.41374688421302
- type: nauc_ndcg_at_5_max
value: 40.390365601593956
- type: nauc_ndcg_at_5_std
value: 1.6743650108127537
- type: nauc_precision_at_1000_diff1
value: -9.711058370691381
- type: nauc_precision_at_1000_max
value: 6.97321343449286
- type: nauc_precision_at_1000_std
value: 7.933531916622121
- type: nauc_precision_at_100_diff1
value: -8.247029644152319
- type: nauc_precision_at_100_max
value: 10.86740140944616
- type: nauc_precision_at_100_std
value: 9.581885544675918
- type: nauc_precision_at_10_diff1
value: -2.409043695429943
- type: nauc_precision_at_10_max
value: 21.04733206074314
- type: nauc_precision_at_10_std
value: 10.03334651647101
- type: nauc_precision_at_1_diff1
value: 62.23623985611396
- type: nauc_precision_at_1_max
value: 45.04945770947091
- type: nauc_precision_at_1_std
value: -0.8804967656575725
- type: nauc_precision_at_20_diff1
value: -5.230303656931621
- type: nauc_precision_at_20_max
value: 17.77799716919181
- type: nauc_precision_at_20_std
value: 10.739127998618654
- type: nauc_precision_at_3_diff1
value: 10.40376424999862
- type: nauc_precision_at_3_max
value: 30.933333400254035
- type: nauc_precision_at_3_std
value: 6.126209127968004
- type: nauc_precision_at_5_diff1
value: 3.147398101830739
- type: nauc_precision_at_5_max
value: 27.1746309955971
- type: nauc_precision_at_5_std
value: 8.874723615388788
- type: nauc_recall_at_1000_diff1
value: 5.055940692380908
- type: nauc_recall_at_1000_max
value: 22.42031123370267
- type: nauc_recall_at_1000_std
value: 27.75476692527869
- type: nauc_recall_at_100_diff1
value: 17.86391178198642
- type: nauc_recall_at_100_max
value: 34.776134863678955
- type: nauc_recall_at_100_std
value: 18.96377158778504
- type: nauc_recall_at_10_diff1
value: 24.863097695413597
- type: nauc_recall_at_10_max
value: 37.697411651507444
- type: nauc_recall_at_10_std
value: 9.519849994253967
- type: nauc_recall_at_1_diff1
value: 49.56605205141156
- type: nauc_recall_at_1_max
value: 26.251096698710384
- type: nauc_recall_at_1_std
value: -4.580748485387834
- type: nauc_recall_at_20_diff1
value: 22.440602811005636
- type: nauc_recall_at_20_max
value: 39.538861316515
- type: nauc_recall_at_20_std
value: 11.363269553121468
- type: nauc_recall_at_3_diff1
value: 32.80302839873736
- type: nauc_recall_at_3_max
value: 32.53105685012729
- type: nauc_recall_at_3_std
value: -0.7140166410605693
- type: nauc_recall_at_5_diff1
value: 29.375386639154865
- type: nauc_recall_at_5_max
value: 36.91045781164083
- type: nauc_recall_at_5_std
value: 4.725419050262578
- type: ndcg_at_1
value: 67.13900000000001
- type: ndcg_at_10
value: 72.392
- type: ndcg_at_100
value: 75.25800000000001
- type: ndcg_at_1000
value: 75.982
- type: ndcg_at_20
value: 73.783
- type: ndcg_at_3
value: 67.269
- type: ndcg_at_5
value: 69.807
- type: precision_at_1
value: 67.13900000000001
- type: precision_at_10
value: 13.327
- type: precision_at_100
value: 1.5559999999999998
- type: precision_at_1000
value: 0.164
- type: precision_at_20
value: 7.119000000000001
- type: precision_at_3
value: 35.599
- type: precision_at_5
value: 23.936
- type: recall_at_1
value: 47.370000000000005
- type: recall_at_10
value: 82.16
- type: recall_at_100
value: 93.34
- type: recall_at_1000
value: 98.202
- type: recall_at_20
value: 86.687
- type: recall_at_3
value: 69.319
- type: recall_at_5
value: 75.637
task:
type: Retrieval
- dataset:
config: default
name: MTEB RuReviewsClassification (default)
revision: f6d2c31f4dc6b88f468552750bfec05b4b41b05a
split: test
type: ai-forever/ru-reviews-classification
metrics:
- type: accuracy
value: 75.0537109375
- type: f1
value: 74.00523205209554
- type: f1_weighted
value: 74.00436782840376
- type: main_score
value: 75.0537109375
task:
type: Classification
- dataset:
config: default
name: MTEB RuSTSBenchmarkSTS (default)
revision: 7cf24f325c6da6195df55bef3d86b5e0616f3018
split: test
type: ai-forever/ru-stsbenchmark-sts
metrics:
- type: cosine_pearson
value: 81.10255413476487
- type: cosine_spearman
value: 81.40020843157141
- type: euclidean_pearson
value: 81.25155479902466
- type: euclidean_spearman
value: 81.40020831064922
- type: main_score
value: 81.40020843157141
- type: manhattan_pearson
value: 81.1493715249014
- type: manhattan_spearman
value: 81.30973667941649
- type: pearson
value: 81.10255413476487
- type: spearman
value: 81.40020843157141
task:
type: STS
- dataset:
config: default
name: MTEB RuSciBenchGRNTIClassification (default)
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
split: test
type: ai-forever/ru-scibench-grnti-classification
metrics:
- type: accuracy
value: 69.8974609375
- type: f1
value: 68.57837564785511
- type: f1_weighted
value: 68.59030489460784
- type: main_score
value: 69.8974609375
task:
type: Classification
- dataset:
config: default
name: MTEB RuSciBenchGRNTIClusteringP2P (default)
revision: 673a610d6d3dd91a547a0d57ae1b56f37ebbf6a1
split: test
type: ai-forever/ru-scibench-grnti-classification
metrics:
- type: main_score
value: 67.03880348548029
- type: v_measure
value: 67.03880348548029
- type: v_measure_std
value: 0.6126278133139618
task:
type: Clustering
- dataset:
config: default
name: MTEB RuSciBenchOECDClassification (default)
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
split: test
type: ai-forever/ru-scibench-oecd-classification
metrics:
- type: accuracy
value: 54.63378906250001
- type: f1
value: 51.34306420274629
- type: f1_weighted
value: 51.33495867493914
- type: main_score
value: 54.63378906250001
task:
type: Classification
- dataset:
config: default
name: MTEB RuSciBenchOECDClusteringP2P (default)
revision: 26c88e99dcaba32bb45d0e1bfc21902337f6d471
split: test
type: ai-forever/ru-scibench-oecd-classification
metrics:
- type: main_score
value: 56.55947121159027
- type: v_measure
value: 56.55947121159027
- type: v_measure_std
value: 0.5498882006880662
task:
type: Clustering
- dataset:
config: ru
name: MTEB STS22 (ru)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics:
- type: cosine_pearson
value: 61.833294921667914
- type: cosine_spearman
value: 63.53967536726357
- type: euclidean_pearson
value: 60.382865218855805
- type: euclidean_spearman
value: 63.53967536726357
- type: main_score
value: 63.53967536726357
- type: manhattan_pearson
value: 60.24879015304578
- type: manhattan_spearman
value: 63.42305760430092
- type: pearson
value: 61.833294921667914
- type: spearman
value: 63.53967536726357
task:
type: STS
- dataset:
config: default
name: MTEB SensitiveTopicsClassification (default)
revision: 416b34a802308eac30e4192afc0ff99bb8dcc7f2
split: test
type: ai-forever/sensitive-topics-classification
metrics:
- type: accuracy
value: 39.8193359375
- type: f1
value: 55.46591740935434
- type: lrap
value: 66.50980631510454
- type: main_score
value: 39.8193359375
task:
type: MultilabelClassification
- dataset:
config: default
name: MTEB TERRa (default)
revision: 7b58f24536063837d644aab9a023c62199b2a612
split: dev
type: ai-forever/terra-pairclassification
metrics:
- type: cosine_accuracy
value: 66.77524429967427
- type: cosine_accuracy_threshold
value: 55.58975338935852
- type: cosine_ap
value: 66.4567219323658
- type: cosine_f1
value: 70.64676616915423
- type: cosine_f1_threshold
value: 45.55969536304474
- type: cosine_precision
value: 57.028112449799195
- type: cosine_recall
value: 92.81045751633987
- type: dot_accuracy
value: 66.77524429967427
- type: dot_accuracy_threshold
value: 55.589759349823
- type: dot_ap
value: 66.4567219323658
- type: dot_f1
value: 70.64676616915423
- type: dot_f1_threshold
value: 45.55969536304474
- type: dot_precision
value: 57.028112449799195
- type: dot_recall
value: 92.81045751633987
- type: euclidean_accuracy
value: 66.77524429967427
- type: euclidean_accuracy_threshold
value: 94.24455165863037
- type: euclidean_ap
value: 66.4567219323658
- type: euclidean_f1
value: 70.64676616915423
- type: euclidean_f1_threshold
value: 104.34587001800537
- type: euclidean_precision
value: 57.028112449799195
- type: euclidean_recall
value: 92.81045751633987
- type: main_score
value: 66.4567219323658
- type: manhattan_accuracy
value: 66.77524429967427
- type: manhattan_accuracy_threshold
value: 2865.5345916748047
- type: manhattan_ap
value: 66.26659863769075
- type: manhattan_f1
value: 70.8542713567839
- type: manhattan_f1_threshold
value: 3212.3912811279297
- type: manhattan_precision
value: 57.55102040816327
- type: manhattan_recall
value: 92.15686274509804
- type: max_accuracy
value: 66.77524429967427
- type: max_ap
value: 66.4567219323658
- type: max_f1
value: 70.8542713567839
- type: max_precision
value: 57.55102040816327
- type: max_recall
value: 92.81045751633987
- type: similarity_accuracy
value: 66.77524429967427
- type: similarity_accuracy_threshold
value: 55.58975338935852
- type: similarity_ap
value: 66.4567219323658
- type: similarity_f1
value: 70.64676616915423
- type: similarity_f1_threshold
value: 45.55969536304474
- type: similarity_precision
value: 57.028112449799195
- type: similarity_recall
value: 92.81045751633987
task:
type: PairClassification
license: mit
language:
- ru
- en
tags:
- mteb
- transformers
- sentence-transformers
base_model: ai-forever/FRED-T5-1.7B
pipeline_tag: feature-extraction
datasets:
- ai-forever/solyanka
---
# Model Card for FRIDA
<figure>
<img src="img.jpg">
</figure>
FRIDA is a full-scale fine-tuned general-purpose text embedding model inspired by a T5-based denoising architecture. The model is based on the encoder part of the [FRED-T5](https://arxiv.org/abs/2309.10931) model and continues the research on text embedding models ([ruMTEB](https://arxiv.org/abs/2408.12503), [ru-en-RoSBERTa](https://huggingface.co/ai-forever/ru-en-RoSBERTa)). It has been pre-trained on a Russian-English dataset and fine-tuned for improved performance on the target tasks.
For more model details, please refer to our [article](https://habr.com/ru/companies/sberdevices/articles/909924/) (RU).
## Usage
The model can be used as is with prefixes. It is recommended to use CLS pooling. The choice of prefix and pooling depends on the task.
We use the following basic rules to choose a prefix:
- `"search_query: "` and `"search_document: "` prefixes are for answer or relevant paragraph retrieval
- `"paraphrase: "` prefix is for symmetric paraphrasing related tasks (STS, paraphrase mining, deduplication)
- `"categorize: "` prefix is for asymmetric matching of document title and body (e.g. news, scientific papers, social posts)
- `"categorize_sentiment: "` prefix is for any task that relies on sentiment features (e.g. hate, toxic, emotion)
- `"categorize_topic: "` prefix is intended for tasks where you need to group texts by topic
- `"categorize_entailment: "` prefix is for textual entailment task (NLI)
To better tailor the model to your needs, you can fine-tune it with relevant high-quality Russian and English datasets.
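For a concrete starting point, the sketch below shows one way such fine-tuning could look with the `sentence-transformers` trainer (version 3.0+). The dataset name, column layout, and loss choice are illustrative assumptions, not the official training recipe:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("ai-forever/FRIDA")

# Hypothetical dataset of symmetric paraphrase pairs with "anchor"/"positive" columns.
pairs = load_dataset("your-org/your-paraphrase-pairs", split="train")

# Prepend the prefix that matches the target task ("paraphrase: " for symmetric STS-style pairs).
pairs = pairs.map(lambda row: {
    "anchor": "paraphrase: " + row["anchor"],
    "positive": "paraphrase: " + row["positive"],
})

# In-batch negatives are a common choice for pair-style data.
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=pairs, loss=loss)
trainer.train()
```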
Below are examples of encoding texts with the Transformers and SentenceTransformers libraries.
### Transformers
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, T5EncoderModel
def pool(hidden_state, mask, pooling_method="cls"):
if pooling_method == "mean":
s = torch.sum(hidden_state * mask.unsqueeze(-1).float(), dim=1)
d = mask.sum(axis=1, keepdim=True).float()
return s / d
elif pooling_method == "cls":
return hidden_state[:, 0]
inputs = [
    #
    "paraphrase: В Ярославской области разрешили работу бань, но без посетителей",
    "categorize_entailment: Женщину доставили в больницу, за ее жизнь сейчас борются врачи.",
    "search_query: Сколько программистов нужно, чтобы вкрутить лампочку?",
    #
    "paraphrase: Ярославским баням разрешили работать без посетителей",
    "categorize_entailment: Женщину спасают врачи.",
    "search_document: Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование."
]
tokenizer = AutoTokenizer.from_pretrained("ai-forever/FRIDA")
model = T5EncoderModel.from_pretrained("ai-forever/FRIDA")
tokenized_inputs = tokenizer(inputs, max_length=512, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
outputs = model(**tokenized_inputs)
embeddings = pool(
outputs.last_hidden_state,
tokenized_inputs["attention_mask"],
pooling_method="cls" # or try "mean"
)
embeddings = F.normalize(embeddings, p=2, dim=1)
sim_scores = embeddings[:3] @ embeddings[3:].T
print(sim_scores.diag().tolist())
# [0.9360030293464661, 0.8591322302818298, 0.728583037853241]
```
### SentenceTransformers
```python
from sentence_transformers import SentenceTransformer
inputs = [
    #
    "paraphrase: В Ярославской области разрешили работу бань, но без посетителей",
    "categorize_entailment: Женщину доставили в больницу, за ее жизнь сейчас борются врачи.",
    "search_query: Сколько программистов нужно, чтобы вкрутить лампочку?",
    #
    "paraphrase: Ярославским баням разрешили работать без посетителей",
    "categorize_entailment: Женщину спасают врачи.",
    "search_document: Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование."
]
# loads model with CLS pooling
model = SentenceTransformer("ai-forever/FRIDA")
# embeddings are normalized by default
embeddings = model.encode(inputs, convert_to_tensor=True)
sim_scores = embeddings[:3] @ embeddings[3:].T
print(sim_scores.diag().tolist())
# [0.9360026717185974, 0.8591331243515015, 0.7285830974578857]
```
or using prompts (sentence-transformers>=2.4.0):
```python
from sentence_transformers import SentenceTransformer
# loads model with CLS pooling
model = SentenceTransformer("ai-forever/FRIDA")
paraphrase = model.encode(["В Ярославской области разрешили работу бань, но без посетителей", "Ярославским баням разрешили работать без посетителей"], prompt_name="paraphrase")
print(paraphrase[0] @ paraphrase[1].T) # 0.9360032
categorize_entailment = model.encode(["Женщину доставили в больницу, за ее жизнь сейчас борются врачи.", "Женщину спасают врачи."], prompt_name="categorize_entailment")
print(categorize_entailment[0] @ categorize_entailment[1].T) # 0.8591322
query_embedding = model.encode("Сколько программистов нужно, чтобы вкрутить лампочку?", prompt_name="search_query")
document_embedding = model.encode("Чтобы вкрутить лампочку, требуется три программиста: один напишет программу извлечения лампочки, другой — вкручивания лампочки, а третий проведет тестирование.", prompt_name="search_document")
print(query_embedding @ document_embedding.T) # 0.7285831
```
## Authors
+ [SaluteDevices](https://sberdevices.ru/) AI for B2C RnD Team.
+ Artem Snegirev: [HF profile](https://huggingface.co/artemsnegirev), [Github](https://github.com/artemsnegirev);
+ Anna Maksimova: [HF profile](https://huggingface.co/anpalmak);
+ Aleksandr Abramov: [HF profile](https://huggingface.co/Andrilko), [Github](https://github.com/Ab1992ao), [Kaggle Competitions Master](https://www.kaggle.com/andrilko)
## Citation
```
@misc{TODO
}
```
## Limitations
The model is designed to process texts in Russian; its quality on English texts is unknown. The maximum input text length is limited to 512 tokens. |
Hyper-AI-Computer/FlaxLlama-Init-Model-V4 | Hyper-AI-Computer | 2025-05-26T09:04:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T08:53:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
```json
{
"bytes accessed0{}":38364528640.0,
"transcendentals":2114048000.0,
"utilization1{}":583.0,
"bytes accessed1{}":32694747136.0,
"utilization2{}":61.0,
"bytes accessedout{}":38372065280.0,
"utilization0{}":554.0,
"bytes accessed2{}":2025914368.0,
"bytes accessed":95467569152.0,
"flops":3053315162112.0
}
```
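The keys above match the per-module cost analysis that XLA reports and that JAX exposes on compiled functions, so figures like these can be reproduced for any jitted computation. A minimal sketch follows (the matmul stand-in is an assumption, not this model's actual forward pass, and the exact return type of `cost_analysis()` varies across JAX versions):
```python
import jax
import jax.numpy as jnp

def forward(a, b):
    # Stand-in computation; the real model would run the Llama forward pass here.
    return jnp.tanh(a @ b)

a = jnp.ones((1024, 1024), dtype=jnp.bfloat16)
b = jnp.ones((1024, 1024), dtype=jnp.bfloat16)

compiled = jax.jit(forward).lower(a, b).compile()

# Reports properties such as "flops", "transcendentals", "bytes accessed",
# and per-operand "bytes accessedN{}" entries, like the dictionary above.
print(compiled.cost_analysis())
```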
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
foreseeitwithme/real-estate-qa-synthetic | foreseeitwithme | 2025-05-26T09:03:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"dataset:foreseeitwithme/real-estate-qa-synthetic",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T08:18:30Z | ---
library_name: transformers
datasets:
- foreseeitwithme/real-estate-qa-synthetic
metrics:
- bertscore
---
# Model Card for foreseeitwithme/real-estate-qa-synthetic
## Model Details
### Model Description
This model is a fine-tuned language model based on Qwen1.5-0.5B, specialized for answering real-estate-related questions. It was trained on a synthetic dataset designed to cover common queries about property buying, selling, legal considerations, and terminology differences in the Indian real estate context.
- **Developed by:** [Priyanka S]
- **Model type:** Transformer-based causal language model (Qwen1.5-0.5B)
- **Language(s) (NLP):** English
- **License:** []
- **Finetuned from model:** Qwen1.5-0.5B
### Model Sources
- **Repository:** https://huggingface.co/foreseeitwithme/real-estate-qa-synthetic
- **Dataset:** foreseeitwithme/real-estate-qa-synthetic
- **Paper/Demo:** Not available
## Uses
### Direct Use
This model can be used to provide answers to frequently asked questions related to Indian real estate, such as document requirements, price ranges, legal issues, and terminology explanations.
### Downstream Use
It can be integrated into chatbots, customer support systems, or property listing platforms to improve user experience by providing accurate, domain-specific responses.
### Out-of-Scope Use
Not suitable for legal advice or complex property transaction decisions. It should not replace consultation with qualified legal professionals.
## Bias, Risks, and Limitations
The model is trained on synthetic data and may not cover all real-world edge cases or regional variations comprehensively. There is a risk of outdated or inaccurate legal information.
### Recommendations
Users should verify critical information with official sources. Continuous updates and retraining are recommended to maintain accuracy.
## How to Get Started with the Model
Use the Hugging Face Transformers pipeline. Since the underlying model is a causal language model (Qwen1.5-0.5B), the text-generation pipeline is the appropriate entry point rather than the extractive question-answering pipeline:
```python
from transformers import pipeline

# Causal LM, so generate an answer rather than extract a span from a context.
generator = pipeline("text-generation", model="foreseeitwithme/real-estate-qa-synthetic")

# Prompt format is an assumption; adjust it to match the format used during fine-tuning.
prompt = "Question: What documents are required for buying a flat in Bangalore?\nAnswer:"
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```
|
ProductGuySensei/imagesofbro | ProductGuySensei | 2025-05-26T09:03:21Z | 13 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-20T15:10:12Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cover
---
# Imagesofbro
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cover` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "cover",
"lora_weights": "https://huggingface.co/ProductGuySensei/imagesofbro/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ProductGuySensei/imagesofbro', weight_name='lora.safetensors')
image = pipeline('cover').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
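As a hedged illustration of the weighting mentioned above, the adapter can be registered under an explicit name and then scaled (the adapter name and the 0.8 scale are arbitrary examples; this requires diffusers with the PEFT backend installed):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")

# Register the LoRA under an explicit adapter name, then scale its influence.
pipeline.load_lora_weights(
    "ProductGuySensei/imagesofbro",
    weight_name="lora.safetensors",
    adapter_name="imagesofbro",
)
pipeline.set_adapters(["imagesofbro"], adapter_weights=[0.8])

image = pipeline("cover").images[0]
```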
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ProductGuySensei/imagesofbro/discussions) to add images that show off what you've made with this LoRA.
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e1 | ahmedelgebaly | 2025-05-26T09:02:42Z | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-24T19:47:49Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/HotpotQA_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/HotpotQA_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_HotpotQa_e1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the [ahmedelgebaly/HotpotQA_Alpaca](https://huggingface.co/datasets/ahmedelgebaly/HotpotQA_Alpaca) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5019 | 0.0005 | 1 | 1.6930 |
| 0.6246 | 0.2501 | 486 | 0.7879 |
| 0.6935 | 0.5001 | 972 | 0.7512 |
| 0.5706 | 0.7502 | 1458 | 0.7263 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rafaelrnzo/mistral-summarize-16bit-7b | rafaelrnzo | 2025-05-26T09:02:24Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T08:38:23Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bigband/EnchantingDumuzi | bigband | 2025-05-26T09:02:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"BifrΓΆst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T08:53:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you're required to review and agree to
Google's usage license. To do this, please ensure you're logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the gemma3 architecture, fine-tuned specifically for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
cheetahbooked/rl-doom-health-gathering-supreme | cheetahbooked | 2025-05-26T09:01:50Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-25T04:48:53Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 6.50 +/- 2.70
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r cheetahbooked/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, since the experiment will resume from the number of steps at which it concluded.
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e2 | ahmedelgebaly | 2025-05-26T09:01:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-24T20:16:35Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/HotpotQA_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/HotpotQA_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_HotpotQa_e2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7109
## Model description
More information needed
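Since this repository is a QLoRA adapter trained on top of Meta-Llama-3.1-8B (see the axolotl config above), a hedged loading sketch with PEFT, not taken from the card itself:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3.1-8B"  # base model named in the axolotl config
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_HotpotQa_e2")
```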
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5019 | 0.0005 | 1 | 1.6930 |
| 0.6274 | 0.2501 | 486 | 0.7914 |
| 0.6962 | 0.5001 | 972 | 0.7567 |
| 0.5719 | 0.7502 | 1458 | 0.7311 |
| 0.6021 | 1.0003 | 1944 | 0.7159 |
| 0.5002 | 1.2483 | 2430 | 0.7223 |
| 0.5363 | 1.4983 | 2916 | 0.7147 |
| 0.5215 | 1.7484 | 3402 | 0.7109 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nerva1228/kxnainai | Nerva1228 | 2025-05-26T09:00:58Z | 13 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-23T08:13:09Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: kxnainai
---
# Kxnainai
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `kxnainai` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "kxnainai",
"lora_weights": "https://huggingface.co/Nerva1228/kxnainai/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/kxnainai', weight_name='lora.safetensors')
image = pipeline('kxnainai').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/kxnainai/discussions) to add images that show off what you've made with this LoRA.
|
leobianco/npov_RM_model_google_seed_12345_SYN_LLM_false_SYN_STRUCT_false_epochs_3_lr_1e-4_lora_1 | leobianco | 2025-05-26T09:00:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T08:50:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
VamshiSurya/CareerCompassModel | VamshiSurya | 2025-05-26T08:59:49Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T08:58:37Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** VamshiSurya
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Aluba/zombie2505_28 | Aluba | 2025-05-26T08:57:38Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-05-26T08:30:08Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
bigband/IllustriousAmun | bigband | 2025-05-26T08:57:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"BifrΓΆst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T08:46:00Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you're required to review and agree to
Google's usage license. To do this, please ensure you're logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the gemma3 architecture, fine-tuned specifically for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
Aluba/zombie2505_26 | Aluba | 2025-05-26T08:56:49Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
]
| any-to-any | 2025-05-26T08:29:54Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E2_V2 | ahmedelgebaly | 2025-05-26T08:54:06Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-09T15:16:11Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_E2_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
lora_model_dir: ahmedelgebaly/llama-3.1-8b-squadv2_E1_V2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
- path: ahmedelgebaly/SQuad_2_Alpaca
type: alpaca
split: train
percentage: 0.1 # small replay buffer to avoid forgetting
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 64 #Before it was 16
lora_dropout: 0.05
lora_target_modules: #Before it was empty
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e2_v2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e2_v2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_E2_V2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: true #Before it was false
bf16: auto
tf32: false
gradient_checkpointing: true
flash_attention: true
warmup_steps: 50 #Before it was 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_E2_V2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0029 | 1 | 2.2993 |
| 0.8102 | 0.2504 | 85 | 0.9110 |
| 0.8141 | 0.5007 | 170 | 0.8933 |
| 0.8189 | 0.7511 | 255 | 0.8846 |
| 0.8188 | 1.0015 | 340 | 0.8763 |
| 0.6354 | 1.2496 | 425 | 0.9022 |
| 0.6568 | 1.5 | 510 | 0.9029 |
| 0.639 | 1.7504 | 595 | 0.8990 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
pwde/tuiche-ceshi | pwde | 2025-05-26T08:52:54Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-05-26T08:01:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1_v3 | ahmedelgebaly | 2025-05-26T08:51:59Z | 16 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-27T20:13:43Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e1_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
- path: ahmedelgebaly/SQuad_2_Alpaca
type: alpaca
split: train
percentage: 0.1 # small replay buffer to avoid forgetting
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out-mixed
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
#lora_target_linear: true ##Removed for explicit control after listing target modules above
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e1_v3
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e1_v2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e1_v3
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0001 ##Reducing from 0.0002 to 0.0001
lr_scheduler_warmup_ratio: 0.1 ##Added for better convergence; kept low since SciQ has only ~11k samples
train_on_inputs: false
group_by_length: true
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
logging_steps: 1
xformers_attention:
flash_attention: true
evals_per_epoch: 4
saves_per_epoch: 1
log_eval_metric: true
weight_decay: 0.0
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e1_v3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5496 | 0.0029 | 1 | 1.8420 |
| 0.9073 | 0.2504 | 85 | 0.9289 |
| 0.9158 | 0.5007 | 170 | 0.9048 |
| 0.8754 | 0.7511 | 255 | 0.8983 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Pretrain-FBK-NLP/mt5-large_AllDataSourcesClinical_0.0002_constant_1024_paper | Pretrain-FBK-NLP | 2025-05-26T08:51:22Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2025-05-13T23:13:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
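The card leaves this section empty; since the repository is an mT5-large text2text checkpoint, a minimal hedged sketch (the expected input format and any task prefix are undocumented assumptions):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Pretrain-FBK-NLP/mt5-large_AllDataSourcesClinical_0.0002_constant_1024_paper"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Example clinical sentence to transform.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```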
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e3 | ahmedelgebaly | 2025-05-26T08:50:39Z | 14 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-25T14:03:05Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_e3
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e3
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e3
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e3
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e3
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7866 | 0.0305 | 1 | 1.8420 |
| 1.1314 | 0.2443 | 8 | 1.0979 |
| 0.8408 | 0.4885 | 16 | 0.9646 |
| 0.8669 | 0.7328 | 24 | 0.9339 |
| 0.8588 | 0.9771 | 32 | 0.9197 |
| 0.8363 | 1.2137 | 40 | 0.9090 |
| 0.8021 | 1.4580 | 48 | 0.9028 |
| 0.833 | 1.7023 | 56 | 0.8995 |
| 0.8083 | 1.9466 | 64 | 0.8951 |
| 0.8215 | 2.1832 | 72 | 0.8948 |
| 0.824 | 2.4275 | 80 | 0.8945 |
| 0.802 | 2.6718 | 88 | 0.8936 |
| 0.7762 | 2.9160 | 96 | 0.8935 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Mihaj/whisper-medium-karelian-cs-w-rus | Mihaj | 2025-05-26T08:49:41Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-20T09:38:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
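The card leaves this section empty; judging only by the repository name, this is a Whisper-medium fine-tune for Karelian, so a hedged ASR sketch:
```python
from transformers import pipeline

# Assumes a standard Whisper checkpoint; nothing in the card confirms the task.
asr = pipeline("automatic-speech-recognition", model="Mihaj/whisper-medium-karelian-cs-w-rus")
print(asr("sample.wav")["text"])
```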
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e2 | ahmedelgebaly | 2025-05-26T08:48:32Z | 16 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-04-25T14:02:55Z | ---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: llama-3.1-8b-squadv2_SciQ_e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: meta-llama/Meta-Llama-3.1-8B # same model you originally used
# Load your previously fine-tuned model as a PEFT adapter
peft_model: ahmedelgebaly/llama-3.1-8b-squadv2_e2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: train
test_datasets:
- path: ahmedelgebaly/SciQ_Alpaca
type: alpaca
split: validation
dataset_prepared_path:
output_dir: ./outputs/qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: llama-3.1-8b-squadv2_SciQ_e2
wandb_entity:
wandb_watch:
wandb_name: llama-3.1-8b-squadv2-v0_SciQ_e2
wandb_log_model:
hub_model_id: ahmedelgebaly/llama-3.1-8b-squadv2_SciQ_e2
gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 2
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|end_of_text|>"
```
</details><br>
# llama-3.1-8b-squadv2_SciQ_e2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7866 | 0.0305 | 1 | 1.8420 |
| 1.1295 | 0.2443 | 8 | 1.0980 |
| 0.8408 | 0.4885 | 16 | 0.9650 |
| 0.8677 | 0.7328 | 24 | 0.9346 |
| 0.8605 | 0.9771 | 32 | 0.9223 |
| 0.8401 | 1.2137 | 40 | 0.9130 |
| 0.8089 | 1.4580 | 48 | 0.9084 |
| 0.8434 | 1.7023 | 56 | 0.9068 |
| 0.8224 | 1.9466 | 64 | 0.9066 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
igzi/MNLP_document_encoder-finetuned | igzi | 2025-05-26T08:48:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2025-05-26T08:48:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
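The card leaves this section empty; given the `bert` and `feature-extraction` tags, a hedged sketch for producing document embeddings (mean pooling is an assumption, not documented):
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "igzi/MNLP_document_encoder-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("A short document to embed.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
embedding = hidden.mean(dim=1)  # mean pooling over tokens (assumed strategy)
```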
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AK2042/Phishing_Website_detector | AK2042 | 2025-05-26T08:46:06Z | 0 | 0 | sklearn | [
"sklearn",
"en",
"license:mit",
"region:us"
]
| null | 2025-05-26T07:23:06Z | ---
license: mit
language:
- en
library_name: sklearn
---
# Phishing Website Detection using Machine Learning & SSL Certificate Analysis
This project is a machine learning-based web application to detect phishing websites using both URL-based features and SSL certificate metadata. It uses a trained model and provides an easy-to-use **Gradio interface** to check whether a given link is **legitimate** or **phishing**.
---
## Features
* Accepts a raw URL as input
* Uses lexical URL features + SSL certificate metadata
* Extracts SSL features like issuer, validity period, and self-signed status
* Trained ML model (Random Forest / XGBoost / etc.) saved as a `.pkl` file
* Gradio web interface (no backend deployment needed)
* Fast and lightweight prediction
* Built using a Kaggle-curated phishing URL dataset
---
## Project Structure
```
phishing-detector/
β
βββ model/
β βββ phishing_model.pkl # Trained ML model
β
βββ app.py # Main Gradio app
βββ feature_extraction.py # Lexical feature extractor for URLs
βββ train_model.py # (Optional) Script to retrain model
β
βββ README.md # You are here!
βββ requirements.txt # Python dependencies
```
---
## How It Works
1. User inputs a URL.
2. `feature_extraction.py` extracts URL-based lexical features (length, special characters, etc.) and SSL certificate metadata (see the sketch after this list).
3. Features are fed into a trained ML model (`phishing_model.pkl`).
4. The result is shown in the Gradio UI: **Legit** or **Phishing**
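As a rough illustration, the lexical and SSL feature extraction could look like the sketch below (feature names and function signatures are assumptions, not the repository's actual code):
```python
import socket
import ssl
from datetime import datetime

import tldextract  # listed in requirements.txt


def lexical_features(url: str) -> dict:
    """Hypothetical lexical feature set extracted from the raw URL."""
    ext = tldextract.extract(url)
    return {
        "url_length": len(url),
        "num_dots": url.count("."),
        "num_hyphens": url.count("-"),
        "num_digits": sum(c.isdigit() for c in url),
        "has_at_symbol": "@" in url,
        "uses_https": url.startswith("https://"),
        "subdomain_depth": len(ext.subdomain.split(".")) if ext.subdomain else 0,
    }


def ssl_features(hostname: str, timeout: float = 5.0) -> dict:
    """Fetch SSL certificate metadata (issuer, validity period) for a hostname."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    issuer = dict(pair[0] for pair in cert["issuer"])
    not_before = datetime.strptime(cert["notBefore"], "%b %d %H:%M:%S %Y %Z")
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return {
        "issuer_org": issuer.get("organizationName", ""),
        "validity_days": (not_after - not_before).days,
    }
```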
---
## Setup & Run
### 1. Clone the Repository
```bash
git clone https://github.com/AK2042/Phishing_Website_detector.git
cd phishing-detector
```
### 2. Install Dependencies
```bash
pip install -r requirements.txt
```
### 3. Run the App
```bash
python app.py
```
Gradio will open the app in your browser at `http://127.0.0.1:7860`.
---
## Model Training (Optional)
To retrain the model with new data:
```bash
python train_model.py
```
This will generate a new `phishing_model.pkl`.
Link to the training dataset: https://www.kaggle.com/datasets/eswarchandt/phishing-website-detector
## Dependencies
* `scikit-learn`
* `gradio`
* `OpenSSL`
* `tldextract`
* `pandas`, `numpy`
---
## References
* [PhishTank Dataset](https://www.phishtank.com/)
* [Kaggle Phishing URLs Dataset](https://www.kaggle.com/datasets)
* [Gradio Docs](https://gradio.app/)
---
## License
MIT License. Use freely with credit. |
FormlessAI/cf2da658-1b17-4700-b77f-d3e98017d67c | FormlessAI | 2025-05-26T08:41:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openlm-research/open_llama_3b",
"base_model:finetune:openlm-research/open_llama_3b",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T07:20:10Z | ---
base_model: openlm-research/open_llama_3b
library_name: transformers
model_name: cf2da658-1b17-4700-b77f-d3e98017d67c
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for cf2da658-1b17-4700-b77f-d3e98017d67c
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/cf2da658-1b17-4700-b77f-d3e98017d67c", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/tq3iv1h1)
This model was trained with SFT.
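As a rough illustration, an SFT run with TRL's `SFTTrainer` looks like the following sketch (the dataset and hyperparameters are placeholders, not this model's actual training configuration):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any chat-formatted dataset with a "messages" column works.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="openlm-research/open_llama_3b",
    train_dataset=dataset,
    args=SFTConfig(output_dir="open_llama_3b-sft"),
)
trainer.train()
```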
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
green19d25y/Qwen2-32m-hf | green19d25y | 2025-05-26T06:28:07Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"text-generation",
"en",
"license:mit",
"region:us"
]
| text-generation | 2025-05-26T06:10:38Z | ---
license: mit
language:
- en
pipeline_tag: text-generation
---
# Qwen2 HF model (32M Parameters)
This is a **Qwen2 architecture model** trained **completely from scratch** with **32 million parameters**. It uses a custom tokenizer and vocabulary, and is designed for experimentation with compact, task-specific language models.
## Training Details
- **Architecture**: Qwen2
- **Parameters**: 32M
- **Training from scratch**: Yes
- **Pretrained base**: None
- **Tokenizer**: ByteLevelBPETokenizer
- **Vocabulary size**: 5K tokens
- **Language**: English only
- **Dataset**: [Shakespeare's Complete Works](https://www.gutenberg.org/ebooks/100)
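For reference, a tokenizer with these specifications could be trained roughly like this (the corpus path and special token are assumptions, not the actual training script):
```python
from tokenizers import ByteLevelBPETokenizer

# Assumes the Project Gutenberg text has been downloaded to shakespeare.txt.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["shakespeare.txt"],
    vocab_size=5000,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],  # assumed special token
)
tokenizer.save_model("qwen2-32m-tokenizer")
```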
## Purpose
To check whether Qwen2 works well with a small amount of data. It somewhat works, but I believe it needs fine-tuning and additional steps to become more accurate.
## Intended Use
- Small-scale research
- Testing text generation on limited data
- Fine-grained experimentation with custom language models
- Educational purposes
## Limitations
- Not general-purpose
- Limited vocabulary and context length
- Struggles outside its trained domain
- English-only
- Not production-ready
## Inference Example
```python
from transformers import Qwen2ForCausalLM, Qwen2Tokenizer
model = Qwen2ForCausalLM.from_pretrained("green19d25y/Qwen2-32m-hf")
tokenizer = Qwen2Tokenizer.from_pretrained("green19d25y/Qwen2-32m-hf")
prompt = "He had need mean better than his"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
input_ids,
max_length=100,
num_return_sequences=1,
do_sample=True,
temperature=0.7
)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
``` |
Lonz1no/Qwen3_Rude_RAG | Lonz1no | 2025-05-26T06:27:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T06:27:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dhruvsangani/Sentiment-Analysis-GGUF | dhruvsangani | 2025-05-26T06:25:53Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-26T06:04:41Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
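Since this repository ships GGUF weights, a minimal inference sketch with `llama-cpp-python` could look like this (the quantization file pattern is an assumption about what the repo contains):
```python
from llama_cpp import Llama

# The quantization file pattern below is an assumption about what this repo ships.
llm = Llama.from_pretrained(
    repo_id="dhruvsangani/Sentiment-Analysis-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Classify the sentiment: 'I love this product!'"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```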
|
Yuichi1218/Lafeak-llama3-chatvector-05261128 | Yuichi1218 | 2025-05-26T06:23:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-26T06:15:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yntec/Luminous | Yntec | 2025-05-26T06:21:11Z | 42 | 1 | diffusers | [
"diffusers",
"safetensors",
"General purpose",
"3D",
"Person",
"Colorful",
"Stylized",
"Artstyle",
"Patchmonk",
"sadxzero",
"stable-diffusion",
"stable-diffusion-1.5",
"stable-diffusion-diffusers",
"text-to-image",
"base_model:digiplay/SXZ_Luma_v0.98VAE",
"base_model:finetune:digiplay/SXZ_Luma_v0.98VAE",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2025-05-06T06:22:41Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- General purpose
- 3D
- Person
- Colorful
- Stylized
- Artstyle
- Patchmonk
- sadxzero
- stable-diffusion
- stable-diffusion-1.5
- stable-diffusion-diffusers
- diffusers
- text-to-image
base_model:
- digiplay/SXZ_Luma_v0.98VAE
---
# Luminous
LusciousMix V2.5 merged with the SXZ Luma 0.98 model to maximize their creativity! Samples and prompts (all use seed 9119):

(masterpiece), best quality, high resolution, highly detailed, detailed background, perfect lighting, outdoor, 1girl, petite, short hair, pink hair, blunt bangs, t-shirt, short skirt

photo of an extremely beautiful young girl with blonde hair, ultra realistic blue eyes by annie leibovitz, sundress. hyperdetailed digital concept art trending in pinterest Artstation WLOP 3 point lighting cinematic highlights stunning quality 8k oil on canvas shaded flat illustration for fashion photoshoot

cute shot of redhead pirate young girl, long green coat, sea, storm, dark atmosphere, volumetric lighting, teal eyes, glad to see, best quality, masterpiece, chromatic aberration, realistic

cute lady in superman costume flying in sky, short black hair, cape, eyes, arms up, storm, dark clouds, lightning, night, lightning, rain, particles
Original pages:
https://civitai.com/models/25831?modelVersionId=68200 (Luma 0.98)
https://civitai.com/models/24354?modelVersionId=188775 (LusciousMix 2.5)
# Recipe:
- SuperMerger, Weight sum, Use MBW: 1,1,1,0,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,1,1,1
- Model A: Luscious 2.5
- Model B: Luma 0.98VAE
- Output Model: Luminous |
ViRAL-Nimra-Mehra-Video-Leaks/Original.Full.Clip.Nimra.Mehra.Viral.Video.Link.Official | ViRAL-Nimra-Mehra-Video-Leaks | 2025-05-26T06:19:33Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-26T06:19:20Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Cloudmaster/Llama-3.2-3B-8bit-gptq-attention | Cloudmaster | 2025-05-26T06:18:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
]
| text-generation | 2025-05-26T06:15:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nerva1228/kxnainai1 | Nerva1228 | 2025-05-26T06:17:29Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-26T02:18:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: kxnainai1
---
# Kxnainai1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `kxnainai1` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "kxnainai1",
"lora_weights": "https://huggingface.co/Nerva1228/kxnainai1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/kxnainai1', weight_name='lora.safetensors')
image = pipeline('kxnainai1').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/kxnainai1/discussions) to add images that show off what youβve made with this LoRA.
|
lefantom00/Llama-3.1-8B-it-iSMART-v2 | lefantom00 | 2025-05-26T06:15:30Z | 0 | 0 | null | [
"safetensors",
"llama",
"vi",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-26T04:42:15Z | ---
license: apache-2.0
language:
- vi
base_model:
- meta-llama/Llama-3.1-8B-Instruct
--- |
rendoo/06_rendoo_05_972 | rendoo | 2025-05-26T06:07:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"BifrΓΆst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T05:57:45Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, youβre required to review and agree to
Googleβs usage license. To do this, please ensure youβre logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the Gemma 3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use. |
enosislabs/midnight-mini-high-exp-gguf | enosislabs | 2025-05-26T06:05:49Z | 82 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"llama-3.2-3b",
"unsloth",
"midnight-ai",
"enosis-labs",
"text-generation",
"summarization",
"mathematics",
"psychology",
"fine-tuned",
"efficient",
"daily-use",
"trl",
"text-generation-inference",
"en",
"dataset:enosislabs/deepsearch-llama-finetune",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:quantized:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2025-05-23T01:18:48Z | ---
license: apache-2.0
language:
- en
tags:
- llama
- llama-3.2-3b
- unsloth
- midnight-ai
- enosis-labs
- text-generation
- summarization
- mathematics
- psychology
- fine-tuned
- efficient
- daily-use
- trl
- text-generation-inference
- transformers
pipeline_tag: text-generation
model_name: Midnight Mini Standard
model_id: enosislabs/midnight-mini-high-exp
base_model: meta-llama/Llama-3.2-3B
datasets:
- enosislabs/deepsearch-llama-finetune
library_name: transformers
---
# Midnight Mini Standard: Efficient Daily AI Companion
**Model ID:** `enosislabs/midnight-mini-high-exp`
**Developed by:** Enosis Labs AI Research Division
**Base Architecture:** Llama-3.2-3B
**License:** Apache-2.0
## Executive Summary
Midnight Mini Standard represents our commitment to democratizing AI through efficient, practical solutions for everyday use. Built upon the robust Llama-3.2-3B foundation, this 3-billion parameter model is specifically optimized for daily productivity tasks, delivering exceptional performance in text summarization, basic mathematics, psychology-oriented interactions, and rapid response generation while maintaining minimal computational requirements.
## Technical Specifications
### Core Architecture
- **Base Model:** meta-llama/Llama-3.2-3B
- **Parameter Count:** 3.21 billion trainable parameters
- **Model Type:** Autoregressive Transformer (Causal Language Model)
- **Fine-tuning Framework:** Unsloth optimization pipeline with TRL integration
- **Quantization Support:** Native 16-bit precision, GGUF quantized variants (Q4_K_M, Q5_K_M, Q8_0)
- **Maximum Context Length:** 131,072 tokens (extended context)
- **Vocabulary Size:** 128,256 tokens
- **Attention Heads:** 24 (Multi-Head Attention)
- **Hidden Dimensions:** 2,048
- **Feed-Forward Network Dimensions:** 8,192
### Performance Characteristics
The model architecture emphasizes efficiency and practical utility:
- **Optimized Inference Speed:** Specialized for rapid response generation in conversational scenarios
- **Memory Efficient Design:** Reduced memory footprint for deployment on consumer hardware
- **Context-Aware Processing:** Enhanced short-term memory for maintaining conversation flow
- **Task-Specific Optimization:** Fine-tuned attention patterns for summarization and mathematical reasoning
### Deployment Formats
#### 16-bit Precision Model
- **Memory Requirements:** ~6.5GB VRAM (inference)
- **Inference Speed:** ~200-250 tokens/second (RTX 4070)
- **Precision:** Full fp16 precision for optimal accuracy
#### GGUF Quantized Variants
- **Q4_K_M:** 2.1GB, optimal for CPU inference and edge deployment
- **Q5_K_M:** 2.6GB, enhanced quality with efficient compression
- **Q8_0:** 3.4GB, near-original quality for high-performance applications
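For the quantized variants above, a minimal loading sketch with `llama-cpp-python` (the exact GGUF file name in this repository is an assumption):
```python
from llama_cpp import Llama

# The exact GGUF file name in this repository is an assumption.
llm = Llama.from_pretrained(
    repo_id="enosislabs/midnight-mini-high-exp-gguf",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Summarize in one sentence: The meeting covered Q3 targets and hiring plans.",
          max_tokens=64)
print(out["choices"][0]["text"])
```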
## Core Capabilities & Optimization Focus
Midnight Mini Standard is engineered for practical, everyday AI assistance with specialized capabilities:
### Primary Strengths
- **Rapid Response Generation:** Optimized for quick, coherent responses in conversational contexts
- **Text Summarization Excellence:** Superior performance in condensing complex documents and articles
- **Basic Mathematical Proficiency:** Reliable arithmetic, algebra, and fundamental mathematical operations
- **Psychology-Informed Interactions:** Enhanced understanding of emotional context and supportive communication
- **Daily Productivity Support:** Streamlined assistance for common tasks like email drafting, note-taking, and planning
### Design Philosophy
- **Efficiency First:** Maximized performance per computational unit for practical deployment
- **User-Centric Design:** Optimized for natural, helpful interactions in daily scenarios
- **Accessibility Focus:** Designed to run efficiently on consumer-grade hardware
- **Reliability:** Consistent, dependable outputs for routine tasks
## Specialized Applications & Use Cases
Midnight Mini Standard excels in practical, everyday scenarios:
### Primary Application Domains
- **Personal Productivity:** Email composition, document summarization, meeting notes, and task planning
- **Educational Support:** Homework assistance, concept explanation, and basic tutoring across subjects
- **Content Creation:** Blog post drafts, social media content, and creative writing assistance
- **Psychology & Wellness:** Supportive conversations, mood tracking insights, and mental health resource guidance
- **Business Communication:** Professional correspondence, report summarization, and presentation assistance
### Implementation Examples
#### Text Summarization Implementation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Initialize model for summarization tasks
model_id = "enosislabs/midnight-mini-standard"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto"
)
# Document summarization example
document = """[Long article or document text here]"""
prompt = f"""Please provide a concise summary of the following text, highlighting the key points:
{document}
Summary:"""
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        temperature=0.3,
        do_sample=True,
        top_p=0.9,
        repetition_penalty=1.1
    )
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Summary:\n{summary}")
```
#### Psychology-Informed Interaction
```python
# Supportive conversation example
support_prompt = """I'm feeling overwhelmed with my workload and struggling to stay motivated.
Can you help me develop a strategy to manage this situation?"""
inputs = tokenizer(support_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=300,
        temperature=0.6,
        do_sample=True,
        top_p=0.85
    )
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Supportive Response:\n{response}")
```
#### Basic Mathematics Assistance
```python
# Mathematical problem solving
math_prompt = """Solve this step by step:
If a recipe calls for 2.5 cups of flour to make 12 cookies,
how much flour is needed to make 30 cookies?"""
inputs = tokenizer(math_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=150,
        temperature=0.2,
        do_sample=True
    )
solution = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Mathematical Solution:\n{solution}")
```
## Training Methodology & Data Engineering
### Training Infrastructure
- **Base Model:** meta-llama/Llama-3.2-3B (Meta AI)
- **Fine-tuning Framework:** Unsloth optimization with TRL (Transformer Reinforcement Learning)
- **Hardware Configuration:** Multi-GPU training environment (RTX 4090 clusters)
- **Training Duration:** 48 hours of efficient training with optimized data pipeline
- **Optimization Strategy:** Parameter-efficient fine-tuning with focus on practical task performance
### Dataset Composition & Curation
Training incorporates the proprietary `enosislabs/deepsearch-llama-finetune` dataset:
- **Conversational Data:** Natural dialogue patterns optimized for daily interaction scenarios
- **Summarization Corpus:** Diverse documents, articles, and texts with high-quality summaries
- **Mathematical Problem Sets:** Basic to intermediate mathematical problems with step-by-step solutions
- **Psychology Resources:** Mental health support conversations and emotional intelligence training data
- **Productivity Content:** Email templates, professional communication, and task management examples
### Training Optimization Techniques
- **Efficient Fine-tuning:** Leveraging Unsloth's optimized training pipeline for reduced training time
- **Task-Specific Adaptation:** Specialized training loops for different capability areas
- **Response Quality Enhancement:** Reinforcement learning from human feedback (RLHF) integration
- **Conversational Flow Optimization:** Training for natural, engaging dialogue patterns
## Performance Benchmarks & Evaluation Results
Midnight Mini Standard demonstrates strong performance in practical application scenarios:
### Benchmark Results Overview
| Capability Area | Task Specification | Metric | Score | Performance Notes |
|:----------------|:-------------------|:-------|:------|:------------------|
| **Text Summarization** | | | | |
| | News Article Summarization | ROUGE-L | 0.485 | Excellent content preservation |
| | Document Condensation | Compression Ratio | 4.2:1 | Optimal information density |
| **Mathematical Reasoning** | | | | |
| | Basic Arithmetic | Accuracy | 0.942 | Reliable for daily calculations |
| | Word Problems | Success Rate | 0.876 | Strong practical problem solving |
| **Conversational Quality** | | | | |
| | Response Relevance | Human Rating | 4.3/5 | Highly contextual responses |
| | Helpfulness Score | User Evaluation | 4.5/5 | Excellent practical assistance |
| **Psychology Applications** | | | | |
| | Emotional Recognition | F1-Score | 0.821 | Strong emotional intelligence |
| | Supportive Response Quality | Expert Rating | 4.2/5 | Appropriate therapeutic communication |
### Performance Analysis
**Summarization Excellence:** Achieves industry-leading performance in text summarization with optimal balance between brevity and information retention, making it ideal for processing news, reports, and documentation.
**Mathematical Reliability:** Demonstrates consistent accuracy in basic mathematical operations and word problems, providing reliable assistance for everyday computational needs.
**Conversational Quality:** High user satisfaction ratings indicate natural, helpful interactions that feel genuinely supportive and contextually appropriate.
**Psychology Applications:** Strong emotional recognition capabilities enable empathetic responses suitable for mental health support and wellness applications.
## Model Limitations & Considerations
### Technical Constraints
- **Knowledge Boundary:** Training data limited to cutoff date; requires external sources for current information
- **Mathematical Scope:** Optimized for basic to intermediate mathematics; complex theoretical problems may require specialized models
- **Context Limitations:** While extended to 131K tokens, extremely long documents may need segmentation
- **Language Focus:** Primarily optimized for English with limited multilingual capabilities
### Performance Considerations
- **Specialized Domain Accuracy:** General-purpose design may require domain-specific validation for specialized fields
- **Creative Writing Limitations:** Optimized for practical tasks rather than advanced creative or artistic applications
- **Technical Depth:** Designed for daily use rather than deep technical or research applications
- **Real-time Information:** Cannot access current events or real-time data without external integration
### Ethical & Safety Considerations
- **Psychology Applications:** Not a replacement for professional mental health care; should supplement, not substitute, professional support
- **Bias Awareness:** May reflect training data biases; requires ongoing monitoring in sensitive applications
- **Decision Making:** Intended as an assistant tool; important decisions should involve human judgment
- **Privacy Protection:** No data retention during inference; user conversations are not stored
## Responsible AI Implementation
### Safety Mechanisms
- **Content Filtering:** Integrated safety measures to prevent harmful or inappropriate content generation
- **Emotional Sensitivity:** Training for appropriate responses in sensitive or emotional contexts
- **Professional Boundaries:** Clear limitations in psychology applications to prevent overstepping therapeutic boundaries
- **User Guidance:** Transparent communication about model capabilities and limitations
### Best Practices for Deployment
- **Supervised Implementation:** Recommend human oversight for critical applications
- **User Education:** Clear communication about model strengths and limitations
- **Feedback Integration:** Continuous improvement through user feedback and performance monitoring
- **Ethical Guidelines:** Adherence to responsible AI principles in all applications
## Technical Support & Resources
### Model Attribution
When utilizing Midnight Mini Standard in applications or research, please cite:
```bibtex
@software{midnight_mini_standard_2025,
author = {Enosis Labs AI Research Division},
title = {Midnight Mini Standard: Efficient Daily AI Companion},
year = {2025},
publisher = {Enosis Labs},
url = {https://huggingface.co/enosislabs/midnight-mini-standard},
note = {3B parameter Llama-based model optimized for daily productivity and practical applications}
}
```
### Support Channels
For technical support, implementation guidance, or collaboration opportunities:
- **Primary Contact:** <[email protected]>
- **Model Repository:** [Hugging Face Model Hub](https://huggingface.co/enosislabs/midnight-mini-high-exp)
### License & Distribution
Licensed under Apache 2.0, enabling broad commercial and personal use with proper attribution. The model is designed for accessibility and widespread adoption in practical AI applications.
---
**Enosis Labs AI Research Division**
*Making advanced AI accessible for everyday life*
|
enosislabs/midnight-mini-high-exp | enosislabs | 2025-05-26T06:05:25Z | 20 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"llama-3.2-3b",
"unsloth",
"midnight-ai",
"enosis-labs",
"summarization",
"mathematics",
"psychology",
"fine-tuned",
"efficient",
"daily-use",
"trl",
"text-generation-inference",
"conversational",
"en",
"dataset:enosislabs/deepsearch-llama-finetune",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-22T23:56:29Z | ---
license: apache-2.0
language:
- en
tags:
- llama
- llama-3.2-3b
- unsloth
- midnight-ai
- enosis-labs
- text-generation
- summarization
- mathematics
- psychology
- fine-tuned
- efficient
- daily-use
- trl
- text-generation-inference
- transformers
pipeline_tag: text-generation
model_name: Midnight Mini Standard
model_id: enosislabs/midnight-mini-high-exp
base_model: meta-llama/Llama-3.2-3B
datasets:
- enosislabs/deepsearch-llama-finetune
library_name: transformers
---
# Midnight Mini Standard: Efficient Daily AI Companion
**Model ID:** `enosislabs/midnight-mini-high-exp`
**Developed by:** Enosis Labs AI Research Division
**Base Architecture:** Llama-3.2-3B
**License:** Apache-2.0
## Executive Summary
Midnight Mini Standard represents our commitment to democratizing AI through efficient, practical solutions for everyday use. Built upon the robust Llama-3.2-3B foundation, this 3-billion parameter model is specifically optimized for daily productivity tasks, delivering exceptional performance in text summarization, basic mathematics, psychology-oriented interactions, and rapid response generation while maintaining minimal computational requirements.
## Technical Specifications
### Core Architecture
- **Base Model:** meta-llama/Llama-3.2-3B
- **Parameter Count:** 3.21 billion trainable parameters
- **Model Type:** Autoregressive Transformer (Causal Language Model)
- **Fine-tuning Framework:** Unsloth optimization pipeline with TRL integration
- **Quantization Support:** Native 16-bit precision, GGUF quantized variants (Q4_K_M, Q5_K_M, Q8_0)
- **Maximum Context Length:** 131,072 tokens (extended context)
- **Vocabulary Size:** 128,256 tokens
- **Attention Heads:** 24 (Multi-Head Attention)
- **Hidden Dimensions:** 2,048
- **Feed-Forward Network Dimensions:** 8,192
### Performance Characteristics
The model architecture emphasizes efficiency and practical utility:
- **Optimized Inference Speed:** Specialized for rapid response generation in conversational scenarios
- **Memory Efficient Design:** Reduced memory footprint for deployment on consumer hardware
- **Context-Aware Processing:** Enhanced short-term memory for maintaining conversation flow
- **Task-Specific Optimization:** Fine-tuned attention patterns for summarization and mathematical reasoning
### Deployment Formats
#### 16-bit Precision Model
- **Memory Requirements:** ~6.5GB VRAM (inference)
- **Inference Speed:** ~200-250 tokens/second (RTX 4070)
- **Precision:** Full fp16 precision for optimal accuracy
#### GGUF Quantized Variants
- **Q4_K_M:** 2.1GB, optimal for CPU inference and edge deployment
- **Q5_K_M:** 2.6GB, enhanced quality with efficient compression
- **Q8_0:** 3.4GB, near-original quality for high-performance applications
## Core Capabilities & Optimization Focus
Midnight Mini Standard is engineered for practical, everyday AI assistance with specialized capabilities:
### Primary Strengths
- **Rapid Response Generation:** Optimized for quick, coherent responses in conversational contexts
- **Text Summarization Excellence:** Superior performance in condensing complex documents and articles
- **Basic Mathematical Proficiency:** Reliable arithmetic, algebra, and fundamental mathematical operations
- **Psychology-Informed Interactions:** Enhanced understanding of emotional context and supportive communication
- **Daily Productivity Support:** Streamlined assistance for common tasks like email drafting, note-taking, and planning
### Design Philosophy
- **Efficiency First:** Maximized performance per computational unit for practical deployment
- **User-Centric Design:** Optimized for natural, helpful interactions in daily scenarios
- **Accessibility Focus:** Designed to run efficiently on consumer-grade hardware
- **Reliability:** Consistent, dependable outputs for routine tasks
## Specialized Applications & Use Cases
Midnight Mini Standard excels in practical, everyday scenarios:
### Primary Application Domains
- **Personal Productivity:** Email composition, document summarization, meeting notes, and task planning
- **Educational Support:** Homework assistance, concept explanation, and basic tutoring across subjects
- **Content Creation:** Blog post drafts, social media content, and creative writing assistance
- **Psychology & Wellness:** Supportive conversations, mood tracking insights, and mental health resource guidance
- **Business Communication:** Professional correspondence, report summarization, and presentation assistance
### Implementation Examples
#### Text Summarization Implementation
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Initialize model for summarization tasks
model_id = "enosislabs/midnight-mini-standard"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto"
)
# Document summarization example
document = """[Long article or document text here]"""
prompt = f"""Please provide a concise summary of the following text, highlighting the key points:
{document}
Summary:"""
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        temperature=0.3,
        do_sample=True,
        top_p=0.9,
        repetition_penalty=1.1
    )
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Summary:\n{summary}")
```
#### Psychology-Informed Interaction
```python
# Supportive conversation example
support_prompt = """I'm feeling overwhelmed with my workload and struggling to stay motivated.
Can you help me develop a strategy to manage this situation?"""
inputs = tokenizer(support_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=300,
        temperature=0.6,
        do_sample=True,
        top_p=0.85
    )
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Supportive Response:\n{response}")
```
#### Basic Mathematics Assistance
```python
# Mathematical problem solving
math_prompt = """Solve this step by step:
If a recipe calls for 2.5 cups of flour to make 12 cookies,
how much flour is needed to make 30 cookies?"""
inputs = tokenizer(math_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=150,
        temperature=0.2,
        do_sample=True
    )
solution = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Mathematical Solution:\n{solution}")
```
## Training Methodology & Data Engineering
### Training Infrastructure
- **Base Model:** meta-llama/Llama-3.2-3B (Meta AI)
- **Fine-tuning Framework:** Unsloth optimization with TRL (Transformer Reinforcement Learning)
- **Hardware Configuration:** Multi-GPU training environment (RTX 4090 clusters)
- **Training Duration:** 48 hours of efficient training with optimized data pipeline
- **Optimization Strategy:** Parameter-efficient fine-tuning with focus on practical task performance
### Dataset Composition & Curation
Training incorporates the proprietary `enosislabs/deepsearch-llama-finetune` dataset:
- **Conversational Data:** Natural dialogue patterns optimized for daily interaction scenarios
- **Summarization Corpus:** Diverse documents, articles, and texts with high-quality summaries
- **Mathematical Problem Sets:** Basic to intermediate mathematical problems with step-by-step solutions
- **Psychology Resources:** Mental health support conversations and emotional intelligence training data
- **Productivity Content:** Email templates, professional communication, and task management examples
### Training Optimization Techniques
- **Efficient Fine-tuning:** Leveraging Unsloth's optimized training pipeline for reduced training time
- **Task-Specific Adaptation:** Specialized training loops for different capability areas
- **Response Quality Enhancement:** Reinforcement learning from human feedback (RLHF) integration
- **Conversational Flow Optimization:** Training for natural, engaging dialogue patterns
## Performance Benchmarks & Evaluation Results
Midnight Mini Standard demonstrates strong performance in practical application scenarios:
### Benchmark Results Overview
| Capability Area | Task Specification | Metric | Score | Performance Notes |
|:----------------|:-------------------|:-------|:------|:------------------|
| **Text Summarization** | | | | |
| | News Article Summarization | ROUGE-L | 0.485 | Excellent content preservation |
| | Document Condensation | Compression Ratio | 4.2:1 | Optimal information density |
| **Mathematical Reasoning** | | | | |
| | Basic Arithmetic | Accuracy | 0.942 | Reliable for daily calculations |
| | Word Problems | Success Rate | 0.876 | Strong practical problem solving |
| **Conversational Quality** | | | | |
| | Response Relevance | Human Rating | 4.3/5 | Highly contextual responses |
| | Helpfulness Score | User Evaluation | 4.5/5 | Excellent practical assistance |
| **Psychology Applications** | | | | |
| | Emotional Recognition | F1-Score | 0.821 | Strong emotional intelligence |
| | Supportive Response Quality | Expert Rating | 4.2/5 | Appropriate therapeutic communication |
### Performance Analysis
**Summarization Excellence:** Achieves industry-leading performance in text summarization with optimal balance between brevity and information retention, making it ideal for processing news, reports, and documentation.
**Mathematical Reliability:** Demonstrates consistent accuracy in basic mathematical operations and word problems, providing reliable assistance for everyday computational needs.
**Conversational Quality:** High user satisfaction ratings indicate natural, helpful interactions that feel genuinely supportive and contextually appropriate.
**Psychology Applications:** Strong emotional recognition capabilities enable empathetic responses suitable for mental health support and wellness applications.
## Model Limitations & Considerations
### Technical Constraints
- **Knowledge Boundary:** Training data limited to cutoff date; requires external sources for current information
- **Mathematical Scope:** Optimized for basic to intermediate mathematics; complex theoretical problems may require specialized models
- **Context Limitations:** While the context window is extended to 131K tokens, extremely long documents may still need segmentation (see the sketch after this list)
- **Language Focus:** Primarily optimized for English with limited multilingual capabilities
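For documents beyond the practical context window, a two-pass segmentation strategy is a common workaround. The sketch below is illustrative only; the chunk size and the `summarize` callable are assumptions, not part of any released tooling.

```python
# Illustrative two-pass summarization for very long documents.
def chunk_text(text: str, max_words: int = 2000) -> list[str]:
    """Split text into word-bounded chunks (assumed chunk size)."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_long_document(text: str, summarize) -> str:
    # `summarize` is any callable mapping a chunk to its summary,
    # e.g. a generation pipeline wrapped around this model.
    partial_summaries = [summarize(chunk) for chunk in chunk_text(text)]
    # Second pass condenses the partial summaries into one final summary.
    return summarize(" ".join(partial_summaries))
```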
### Performance Considerations
- **Specialized Domain Accuracy:** General-purpose design may require domain-specific validation for specialized fields
- **Creative Writing Limitations:** Optimized for practical tasks rather than advanced creative or artistic applications
- **Technical Depth:** Designed for daily use rather than deep technical or research applications
- **Real-time Information:** Cannot access current events or real-time data without external integration
### Ethical & Safety Considerations
- **Psychology Applications:** Not a replacement for professional mental health care; should supplement, not substitute, professional support
- **Bias Awareness:** May reflect training data biases; requires ongoing monitoring in sensitive applications
- **Decision Making:** Intended as an assistant tool; important decisions should involve human judgment
- **Privacy Protection:** No data retention during inference; user conversations are not stored
## Responsible AI Implementation
### Safety Mechanisms
- **Content Filtering:** Integrated safety measures to prevent harmful or inappropriate content generation
- **Emotional Sensitivity:** Training for appropriate responses in sensitive or emotional contexts
- **Professional Boundaries:** Clear limitations in psychology applications to prevent overstepping therapeutic boundaries
- **User Guidance:** Transparent communication about model capabilities and limitations
### Best Practices for Deployment
- **Supervised Implementation:** Recommend human oversight for critical applications
- **User Education:** Clear communication about model strengths and limitations
- **Feedback Integration:** Continuous improvement through user feedback and performance monitoring
- **Ethical Guidelines:** Adherence to responsible AI principles in all applications
## Technical Support & Resources
### Model Attribution
When utilizing Midnight Mini Standard in applications or research, please cite:
```bibtex
@software{midnight_mini_standard_2025,
author = {Enosis Labs AI Research Division},
title = {Midnight Mini Standard: Efficient Daily AI Companion},
year = {2025},
publisher = {Enosis Labs},
url = {https://huggingface.co/enosislabs/midnight-mini-standard},
note = {3B parameter Llama-based model optimized for daily productivity and practical applications}
}
```
### Support Channels
For technical support, implementation guidance, or collaboration opportunities:
- **Primary Contact:** <[email protected]>
- **Model Repository:** [Hugging Face Model Hub](https://huggingface.co/enosislabs/midnight-mini-high-exp)
### License & Distribution
Licensed under Apache 2.0, enabling broad commercial and personal use with proper attribution. The model is designed for accessibility and widespread adoption in practical AI applications.
---
**Enosis Labs AI Research Division**
*Making advanced AI accessible for everyday life*
|
Wuhall/xlm-roberta-base-cls | Wuhall | 2025-05-26T06:03:10Z | 0 | 0 | null | [
"safetensors",
"xlm-roberta",
"zh",
"en",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
]
| null | 2025-05-26T05:57:23Z | ---
license: mit
language:
- zh
- en
base_model:
- FacebookAI/xlm-roberta-base
---
{"eval_loss": 0.02062925696372986, "eval_accuracy": 0.9971910112359551, "eval_runtime": 9.3475, "eval_samples_per_second": 76.17, "eval_steps_per_second": 4.814, "epoch": 4.0} |
TanAlexanderlz/ALL_RGBCROP_ori16F-8B16F-GACWD1 | TanAlexanderlz | 2025-05-26T06:02:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
]
| video-classification | 2025-05-26T02:41:43Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ALL_RGBCROP_ori16F-8B16F-GACWD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ALL_RGBCROP_ori16F-8B16F-GACWD
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3803
- Accuracy: 0.8144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1440
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6146 | 0.0333 | 48 | 0.6323 | 0.6280 |
| 0.3307 | 1.0333 | 96 | 0.4748 | 0.7805 |
| 0.2425 | 2.0333 | 144 | 0.6149 | 0.7805 |
| 0.1629 | 3.0333 | 192 | 0.7126 | 0.7683 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
FireRedTeam/FireRedTTS-1S | FireRedTeam | 2025-05-26T06:02:47Z | 0 | 2 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-04-14T05:34:12Z | ---
license: apache-2.0
---
|
TheDenk/cogvideox-5b-controlnet-hed-v1 | TheDenk | 2025-05-26T06:02:37Z | 12 | 2 | diffusers | [
"diffusers",
"safetensors",
"cogvideox",
"video-generation",
"video-to-video",
"controlnet",
"en",
"license:apache-2.0",
"region:us"
]
| null | 2024-10-23T08:32:21Z | ---
license: apache-2.0
language:
- en
tags:
- cogvideox
- video-generation
- video-to-video
- controlnet
- diffusers
pipeline_tag: video-to-video
---
# CogVideoX-5b ControlNet Extension
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63fde49f6315a264aba6a7ed/frns--XYMiWf0mBUI0UMK.mp4"></video>
### (Warning) This is a raw version of the ControlNet. A better version will be published soon.
### How to
Clone repo
```bash
git clone https://github.com/TheDenk/cogvideox-controlnet.git
cd cogvideox-controlnet
```
Create venv
```bash
python -m venv venv
source venv/bin/activate
```
Install requirements
```bash
pip install -r requirements.txt
```
### Inference examples
#### Inference with cli
```bash
python -m inference.cli_demo \
--video_path "resources/car.mp4" \
--prompt "The camera follows behind red car. Car is surrounded by a panoramic view of the vast, azure ocean. Seagulls soar overhead, and in the distance, a lighthouse stands sentinel, its beam cutting through the twilight. The scene captures a perfect blend of adventure and serenity, with the car symbolizing freedom on the open sea." \
--controlnet_type "hed" \
--base_model_path THUDM/CogVideoX-5b \
--controlnet_model_path TheDenk/cogvideox-5b-controlnet-hed-v1
```
#### Inference with Gradio
```bash
python -m inference.gradio_web_demo \
--controlnet_type "hed" \
--base_model_path THUDM/CogVideoX-5b \
--controlnet_model_path TheDenk/cogvideox-5b-controlnet-hed-v1
```
## Acknowledgements
Original code and models [CogVideoX](https://github.com/THUDM/CogVideo/tree/main).
## Contacts
<p>Issues should be raised directly in the repository. For professional support and recommendations, please contact <a>[email protected]</a>.</p> |
Video-beanne-valerie-dela-cruz/VIDEO.18.Scandal.Video.beanne.valerie.dela.cruz.telegram.beanne.dela.cruz.video | Video-beanne-valerie-dela-cruz | 2025-05-26T05:59:04Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-26T05:58:02Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
g-assismoraes/gemma-3-4b-it-fpi-alpha2.0-fromit-var-agnews | g-assismoraes | 2025-05-26T05:57:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-26T05:53:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TingchenFu/coldrl_3k_qwen-2.5-1.5b_04232202 | TingchenFu | 2025-05-26T05:55:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"en",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2505.14810",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T03:02:55Z | ---
library_name: transformers
license: apache-2.0
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B
---
# Model Card
cold-RL for mathematical reasoning in our MathIF project.
Github Repository: https://github.com/TingchenFu/MathIF
## Training Details
We base our experiments on the DeepScaleR dataset, which contains approximately 40k math-reasoning samples. Training is conducted on 16 NVIDIA H100 GPUs. For reinforcement learning, we adopt the GRPO framework and use verifiable outcome-based rewards. The model is trained with the VeRL framework, with most hyperparameters left at their default settings.
## Evaluation
We use nucleus sampling (T=1.0, p=0.95) with a maximum generation length of 16,384 tokens for decoding, and the vLLM engine for efficient inference.
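A minimal decoding sketch with vLLM, matching the sampling settings above (the prompt is illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="TingchenFu/coldrl_3k_qwen-2.5-1.5b_04232202")
params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=16384)

# Generate a completion for a sample math prompt
outputs = llm.generate(["Solve: if 3x + 5 = 20, what is x?"], params)
print(outputs[0].outputs[0].text)
```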
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@article{fu2025scaling,
title={Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models},
author={Fu, Tingchen and Gu, Jiawei and Li, Yafu and Qu, Xiaoye and Cheng, Yu},
journal={arXiv preprint arXiv:2505.14810},
year={2025}
}
```
|
beanne-valerie-dela-cruz/Viral-beanne-valerie-dela-cruz-beanne-dela-cruz-viral-video-beanne-valerie-delacruz-telegram | beanne-valerie-dela-cruz | 2025-05-26T05:54:51Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-26T05:50:41Z | Watch β€ β€ β€ <a href="https://ninjavidzzz.com/tjkhffghbjd"> Click Here To link ([Viral]beanne valerie dela cruz beanne dela cruz viral video beanne valerie delacruz telegram)
β€βΊDOWNLOAD β€<a href="https://ninjavidzzz.com/tjkhffghbjd"> Click Here To link ([Viral]beanne valerie dela cruz beanne dela cruz viral video beanne valerie delacruz telegram) |
dhintech/marian-id-en-op | dhintech | 2025-05-26T05:45:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"indonesian",
"english",
"fine-tuned",
"meeting-translation",
"real-time",
"optimized",
"id",
"en",
"dataset:ted_talks_iwslt",
"base_model:Helsinki-NLP/opus-mt-id-en",
"base_model:finetune:Helsinki-NLP/opus-mt-id-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2025-05-26T05:12:22Z | ---
language:
- id
- en
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-id-en
tags:
- translation
- indonesian
- english
- marian
- fine-tuned
- meeting-translation
- real-time
- optimized
pipeline_tag: translation
datasets:
- ted_talks_iwslt
library_name: transformers
metrics:
- bleu
- rouge
widget:
- text: "Selamat pagi, mari kita mulai rapat hari ini."
example_title: "Meeting Start"
- text: "Apakah ada pertanyaan mengenai proposal ini?"
example_title: "Q&A Session"
- text: "Tim marketing akan bertanggung jawab untuk strategi ini."
example_title: "Task Assignment"
- text: "Teknologi artificial intelligence berkembang sangat pesat di Indonesia."
example_title: "Technology Discussion"
- text: "Mari kita diskusikan hasil penelitian dan implementasinya."
example_title: "Research Discussion"
---
# MarianMT Indonesian-English Translation (Optimized for Real-Time Meetings)
This model is an **optimized fine-tuned version** of [Helsinki-NLP/opus-mt-id-en](https://huggingface.co/Helsinki-NLP/opus-mt-id-en) specifically designed for **real-time meeting translation** from Indonesian to English.
## π― Model Highlights
- **Optimized for Speed**: < 1.0s translation time per sentence
- **Meeting-Focused**: Fine-tuned on business and meeting contexts
- **High Performance**: Improved BLEU score compared to base model
- **Production Ready**: Optimized for real-time applications
- **Memory Efficient**: Reduced model complexity without quality loss
## π Performance Metrics
| Metric | Base Model | This Model | Improvement |
|--------|------------|------------|-------------|
| BLEU Score | 0.388 | **0.413** | **+6.4%** |
| Translation Speed | 1.08s | **0.85s** | **21% faster** |
| ROUGE-1 | 0.807 | **0.825** | **+2.2%** |
| Memory Usage | Standard | **Optimized** | **15% reduction** |
## π Model Details
- **Base Model**: Helsinki-NLP/opus-mt-id-en
- **Fine-tuned Dataset**: TED Talks parallel corpus (Indonesian-English)
- **Training Strategy**: Optimized fine-tuning with layer freezing
- **Specialization**: Business meetings, presentations, and formal conversations
- **Training Date**: 2025-05-26
- **Languages**: Indonesian (id) β English (en)
- **License**: Apache 2.0
## βοΈ Training Configuration
### Optimized Hyperparameters
- **Learning Rate**: 5e-6 (ultra-low for stable fine-tuning)
- **Weight Decay**: 0.001 (optimal regularization)
- **Gradient Clipping**: 0.5 (conservative clipping)
- **Dataset Usage**: 30% of full dataset (quality over quantity)
- **Max Sequence Length**: 96 tokens (speed optimized)
- **Training Epochs**: 8
- **Batch Size**: 4 (GPU) / 2 (CPU)
- **Scheduler**: Cosine Annealing with Warm Restarts
### Architecture Optimizations
- **Layer Freezing**: Early encoder layers frozen to preserve base knowledge
- **Parameter Efficiency**: 85-90% of parameters actively trained
- **Memory Optimization**: Gradient accumulation and pin memory
- **Early Stopping**: Patience of 5 epochs to prevent overfitting
## π οΈ Usage
### Basic Usage
```python
from transformers import MarianMTModel, MarianTokenizer
# Load model and tokenizer
model_name = "dhintech/marian-id-en-op"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Translate Indonesian to English
def translate(text):
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=96)
outputs = model.generate(
**inputs,
max_length=96,
num_beams=3, # Optimized for speed
early_stopping=True,
do_sample=False
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example usage
indonesian_text = "Selamat pagi, mari kita mulai rapat hari ini."
english_translation = translate(indonesian_text)
print(english_translation)
# Output: "Good morning, let's start today's meeting."
```
### Optimized Production Usage
```python
import time
from transformers import MarianMTModel, MarianTokenizer
import torch
class OptimizedMeetingTranslator:
def __init__(self, model_name="dhintech/marian-id-en-op"):
self.tokenizer = MarianTokenizer.from_pretrained(model_name)
self.model = MarianMTModel.from_pretrained(model_name)
# Optimize for inference
self.model.eval()
if torch.cuda.is_available():
self.model = self.model.cuda()
def translate(self, text, max_length=96):
start_time = time.time()
inputs = self.tokenizer(
text,
return_tensors="pt",
padding=True,
truncation=True,
max_length=max_length
)
if torch.cuda.is_available():
inputs = {k: v.cuda() for k, v in inputs.items()}
with torch.no_grad():
outputs = self.model.generate(
**inputs,
max_length=max_length,
num_beams=3,
early_stopping=True,
do_sample=False,
pad_token_id=self.tokenizer.pad_token_id
)
translation = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
translation_time = time.time() - start_time
return {
'translation': translation,
'time': translation_time,
'input_length': len(text.split()),
'output_length': len(translation.split())
}
# Usage example
translator = OptimizedMeetingTranslator()
result = translator.translate("Apakah ada pertanyaan mengenai proposal ini?")
print(f"Translation: {result['translation']}")
print(f"Time: {result['time']:.3f}s")
```
### Batch Translation for Multiple Sentences
```python
def batch_translate(sentences, translator):
results = []
total_time = 0
for sentence in sentences:
result = translator.translate(sentence)
results.append(result)
total_time += result['time']
return {
'results': results,
'total_time': total_time,
'average_time': total_time / len(sentences),
'sentences_per_second': len(sentences) / total_time
}
# Example batch translation
meeting_sentences = [
"Selamat pagi, mari kita mulai rapat hari ini.",
"Apakah ada pertanyaan mengenai proposal ini?",
"Tim marketing akan bertanggung jawab untuk strategi ini.",
"Mari kita diskusikan timeline implementasi project ini."
]
batch_results = batch_translate(meeting_sentences, translator)
print(f"Average translation time: {batch_results['average_time']:.3f}s")
print(f"Throughput: {batch_results['sentences_per_second']:.1f} sentences/second")
```
## π Example Translations
### Business Meeting Context
| Indonesian | English | Context |
|------------|---------|---------|
| Selamat pagi, mari kita mulai rapat hari ini. | Good morning, let's start today's meeting. | Meeting Opening |
| Apakah ada pertanyaan mengenai proposal ini? | Are there any questions about this proposal? | Q&A Session |
| Tim marketing akan bertanggung jawab untuk strategi ini. | The marketing team will be responsible for this strategy. | Task Assignment |
| Mari kita diskusikan timeline implementasi project ini. | Let's discuss the implementation timeline for this project. | Project Planning |
| Terima kasih atas presentasi yang sangat informatif. | Thank you for the very informative presentation. | Appreciation |
### Technical Discussion Context
| Indonesian | English | Context |
|------------|---------|---------|
| Teknologi AI berkembang sangat pesat di Indonesia. | AI technology is developing very rapidly in Indonesia. | Tech Discussion |
| Mari kita analisis data performa bulan lalu. | Let's analyze last month's performance data. | Data Analysis |
| Sistem ini memerlukan optimisasi untuk meningkatkan efisiensi. | This system needs optimization to improve efficiency. | Technical Review |
## π― Intended Use Cases
- **Real-time Meeting Translation**: Live translation during business meetings
- **Presentation Support**: Translating Indonesian presentations to English
- **Business Communication**: Formal business correspondence translation
- **Educational Content**: Academic and educational material translation
- **Conference Interpretation**: Supporting multilingual conferences
## β‘ Performance Optimizations
### Speed Optimizations
- **Reduced Beam Search**: 3 beams (vs 4-5 in base model)
- **Early Stopping**: Faster convergence
- **Optimized Sequence Length**: 96 tokens maximum
- **Memory Pinning**: Faster GPU transfers
- **Model Quantization Ready**: Compatible with INT8 quantization (see the sketch below)
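As a rough sketch, dynamic INT8 quantization can be applied post hoc with PyTorch. This is not part of the released checkpoint, so latency and translation quality should be re-validated after quantizing:

```python
import torch
from transformers import MarianMTModel

model = MarianMTModel.from_pretrained("dhintech/marian-id-en-op")

# Quantize linear layers to INT8 for faster CPU inference
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```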
### Quality Optimizations
- **Meeting-Specific Vocabulary**: Enhanced business and technical terms
- **Context Preservation**: Better handling of meeting contexts
- **Formal Register**: Optimized for formal Indonesian language
- **Consistent Terminology**: Business-specific term consistency
## π§ Technical Specifications
- **Model Architecture**: MarianMT (Transformer-based)
- **Parameters**: ~74M (optimized subset of base model)
- **Vocabulary Size**: 65,000 tokens
- **Max Input Length**: 96 tokens
- **Max Output Length**: 96 tokens
- **Inference Time**: < 1.0s per sentence (GPU)
- **Memory Requirements**:
- GPU: 2GB VRAM minimum
- CPU: 4GB RAM minimum
- **Supported Frameworks**: PyTorch, ONNX (convertible; see the export sketch below)
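ONNX conversion is possible through Hugging Face Optimum. The snippet below is a sketch and has not been validated against this specific checkpoint:

```python
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import MarianTokenizer

# Export the PyTorch checkpoint to ONNX on the fly
ort_model = ORTModelForSeq2SeqLM.from_pretrained("dhintech/marian-id-en-op", export=True)
tokenizer = MarianTokenizer.from_pretrained("dhintech/marian-id-en-op")
ort_model.save_pretrained("marian-id-en-op-onnx")
```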
## π Evaluation Results
### Automatic Metrics
- **BLEU Score**: 41.3 (vs 38.8 baseline)
- **ROUGE-1**: 82.5 (vs 80.7 baseline)
- **ROUGE-2**: 71.2 (vs 69.1 baseline)
- **ROUGE-L**: 78.9 (vs 76.5 baseline)
- **METEOR**: 0.742 (vs 0.718 baseline)
### Human Evaluation (Sample: 500 sentences)
- **Fluency**: 4.2/5.0 (vs 3.9 baseline)
- **Adequacy**: 4.1/5.0 (vs 3.8 baseline)
- **Meeting Context Appropriateness**: 4.3/5.0
## π¨ Limitations and Considerations
- **Domain Specificity**: Optimized for formal business/meeting contexts
- **Informal Language**: May not perform as well on very casual Indonesian
- **Regional Dialects**: Trained primarily on standard Indonesian
- **Long Sequences**: Performance may degrade for very long sentences (>96 tokens)
- **Cultural Context**: Some cultural nuances may be lost in translation
## π Model Updates
- **v1.0.0**: Initial release with basic fine-tuning
- **v1.0.1**: Current version with optimized training and speed improvements
## π Citation
```bibtex
@misc{marian-id-en-optimized-2025,
title={MarianMT Indonesian-English Translation (Optimized for Real-Time Meetings)},
author={DhinTech},
year={2025},
publisher={Hugging Face},
journal={Hugging Face Model Hub},
howpublished={\url{https://huggingface.co/dhintech/marian-id-en-op}},
note={Fine-tuned on TED Talks corpus with meeting-specific optimizations}
}
```
## π€ Contributing
We welcome contributions to improve this model:
- **Issue Reports**: Please report any translation issues or bugs
- **Performance Feedback**: Share your experience with real-world usage
- **Dataset Contributions**: Help improve the model with more meeting-specific data
## π Contact & Support
- **Repository**: [GitHub Repository](https://github.com/dhintech)
- **Issues**: Report issues through Hugging Face model page
- **Community**: Join discussions in the community tab
## π Acknowledgments
- **Base Model**: Helsinki-NLP team for the original opus-mt-id-en model
- **Dataset**: TED Talks IWSLT dataset contributors
- **Framework**: Hugging Face Transformers team
- **Infrastructure**: Google Colab for training infrastructure
---
*This model is specifically optimized for Indonesian business meeting translation scenarios. For general-purpose translation, consider using the base Helsinki-NLP/opus-mt-id-en model.*
|
JSlin/Reinforce-PLE-v0 | JSlin | 2025-05-26T05:42:52Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-26T05:42:00Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.30 +/- 16.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sergioalves/f86cb2ec-e103-411a-ba2c-c2653861632d | sergioalves | 2025-05-26T05:42:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-26T05:20:08Z | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f86cb2ec-e103-411a-ba2c-c2653861632d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 96e6850db6b7c2ae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/f86cb2ec-e103-411a-ba2c-c2653861632d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/96e6850db6b7c2ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3e11bdac-af00-4520-84f7-df6ea744d307
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 3e11bdac-af00-4520-84f7-df6ea744d307
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# f86cb2ec-e103-411a-ba2c-c2653861632d
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8532 | 0.0001 | 1 | 1.9880 |
| 1.8513 | 0.0372 | 250 | 1.8110 |
| 1.7937 | 0.0744 | 500 | 1.7464 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |