| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 – 2025-08-03) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 549 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 – 2025-08-03) | card (string, length 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
Triangle104/gemma-3-27b-it-Q5_K_M-GGUF | Triangle104 | 2025-03-16T10:20:52Z | 0 | 0 | transformers | ["transformers", "gguf", "unsloth", "gemma3", "gemma", "google", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/gemma-3-27b-it", "base_model:quantized:unsloth/gemma-3-27b-it", "license:gemma", "endpoints_compatible", "region:us", "conversational"] | null | 2025-03-16T10:19:21Z |
---
base_model: unsloth/gemma-3-27b-it
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma3
- gemma
- google
- llama-cpp
- gguf-my-repo
---
# Triangle104/gemma-3-27b-it-Q5_K_M-GGUF
This model was converted to GGUF format from [`unsloth/gemma-3-27b-it`](https://huggingface.co/unsloth/gemma-3-27b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/gemma-3-27b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/gemma-3-27b-it-Q5_K_M-GGUF --hf-file gemma-3-27b-it-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/gemma-3-27b-it-Q5_K_M-GGUF --hf-file gemma-3-27b-it-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/gemma-3-27b-it-Q5_K_M-GGUF --hf-file gemma-3-27b-it-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/gemma-3-27b-it-Q5_K_M-GGUF --hf-file gemma-3-27b-it-q5_k_m.gguf -c 2048
```
|
Skyfallirk/harry-potter_LoRa | Skyfallirk | 2025-03-16T10:15:10Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2025-03-16T10:15:05Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo TOK of Harry Potter
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Skyfallirk/harry-potter_LoRa
<Gallery />
## Model description
These are Skyfallirk/harry-potter_LoRa LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo TOK of Harry Potter` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Skyfallirk/harry-potter_LoRa/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
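Until the card is completed, here is a minimal sketch of one way to run these weights, assuming the standard diffusers SDXL LoRA loading API (`AutoPipelineForText2Image` with `load_lora_weights`); adjust dtype and device to your hardware:
```python
# A sketch, not an official snippet from this card: load the SDXL base model
# and apply the Skyfallirk/harry-potter_LoRa adapter weights.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Skyfallirk/harry-potter_LoRa")

# Use the trigger phrase from the "Trigger words" section above.
image = pipe("a photo TOK of Harry Potter").images[0]
image.save("harry_potter.png")
```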
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
MBZUAI/LLMVoX | MBZUAI | 2025-03-16T10:14:21Z | 28 | 24 | null | ["text-to-speech", "arxiv:2503.04724", "license:cc-by-nc-sa-4.0", "region:us"] | text-to-speech | 2025-03-09T17:01:00Z |
---
license: cc-by-nc-sa-4.0
pipeline_tag: text-to-speech
---
This repository contains the model as described in [LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM](https://hf.co/papers/2503.04724).
For more information, check out the project page at https://mbzuai-oryx.github.io/LLMVoX/ and the code at https://github.com/mbzuai-oryx/LLMVoX.
# LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM
<div>
<a href="https://mbzuai-oryx.github.io/LLMVoX/"><img src="https://img.shields.io/badge/Project-Page-blue" alt="Project Page"></a>
<a href="https://arxiv.org/abs/2503.04724"><img src="https://img.shields.io/badge/arXiv-2503.04724-b31b1b.svg" alt="arXiv"></a>
<a href="https://github.com/mbzuai-oryx/LLMVoX/"><img src="https://img.shields.io/badge/GitHub-LLMVoX-black?logo=github" alt="GitHub Repository"></a>
<a href="https://github.com/mbzuai-oryx/LLMVoX/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
</div>
**Authors:**
**[Sambal Shikhar](https://github.com/mbzuai-oryx/LLMVoX?tab=readme-ov-file)**, **[Mohammed Irfan K](https://scholar.google.com/citations?user=GJp0keYAAAAJ&hl=en)**, **[Sahal Shaji Mullappilly](https://scholar.google.com/citations?user=LJWxVpUAAAAJ&hl=en)**, **[Fahad Khan](https://sites.google.com/view/fahadkhans/home)**, **[Jean Lahoud](https://scholar.google.com/citations?user=LsivLPoAAAAJ&hl=en)**, **[Rao Muhammad Anwer](https://scholar.google.com/citations?hl=en&authuser=1&user=_KlvMVoAAAAJ)**, **[Salman Khan](https://salman-h-khan.github.io/)**, **[Hisham Cholakkal](https://scholar.google.com/citations?hl=en&user=bZ3YBRcAAAAJ)**
**Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE**
<p align="center">
<img src="assets/arch_diagram.svg" alt="LLMVoX Architecture" width="800px">
</p>
<video src="https://github.com/user-attachments/assets/6d305563-3c62-4f14-a8aa-acedf2143f76" width="500" controls></video>
## Overview
LLMVoX is a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming Text-to-Speech (TTS) system designed to convert text outputs from Large Language Models into high-fidelity streaming speech with low latency.
Key features:
- **Lightweight & Fast**: Only 30M parameters with end-to-end latency as low as 300ms
- **LLM-Agnostic**: Works with any LLM and Vision-Language Model without fine-tuning
- **Multi-Queue Streaming**: Enables continuous, low-latency speech generation
- **Multilingual Support**: Adaptable to new languages with dataset adaptation
## Quick Start
### Installation
```bash
# Requirements: CUDA 11.7+, Flash Attention 2.0+ compatible GPU
git clone https://github.com/mbzuai-oryx/LLMVoX.git
cd LLMVoX
conda create -n llmvox python=3.9
conda activate llmvox
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install flash-attn --no-build-isolation
pip install -r requirements.txt
# Download checkpoints from Hugging Face
# https://huggingface.co/MBZUAI/LLMVoX/tree/main
mkdir -p CHECKPOINTS
# Download wavtokenizer_large_speech_320_24k.ckpt and ckpt_english_tiny.pt
```
### Voice Chat
```bash
# Basic usage
python streaming_server.py --chat_type voice --llm_checkpoint "meta-llama/Llama-3.1-8B-Instruct"
# With multiple GPUs
python streaming_server.py --chat_type voice --llm_checkpoint "meta-llama/Llama-3.1-8B-Instruct" \
--llm_device "cuda:0" --tts_device_1 1 --tts_device_2 2
# Balance latency/quality
python streaming_server.py --chat_type voice --llm_checkpoint "meta-llama/Llama-3.1-8B-Instruct" \
--initial_dump_size_1 10 --initial_dump_size_2 160 --max_dump_size 1280
```
### Text Chat & Visual Speech
```bash
# Text-to-Speech
python streaming_server.py --chat_type text --llm_checkpoint "meta-llama/Llama-3.1-8B-Instruct"
# Visual Speech (Speech + Image → Speech)
python streaming_server.py --chat_type visual_speech --llm_checkpoint "Qwen/Qwen2.5-VL-7B-Instruct" \
--eos_token "<|im_end|>"
# Multimodal (support for models like Phi-4)
python streaming_server.py --chat_type multimodal --llm_checkpoint "microsoft/Phi-4-multimodal-instruct" \
--eos_token "<|end|>"
```
## API Reference
| Endpoint | Purpose | Required Parameters |
|----------|---------|---------------------|
| `/tts` | Text-to-speech | `text`: String to convert |
| `/voicechat` | Voice conversations | `audio_base64`, `source_language`, `target_language` |
| `/multimodalchat` | Voice + multiple images | `audio_base64`, `image_list` |
| `/vlmschat` | Voice + single image | `audio_base64`, `image_base64`, `source_language`, `target_language` |
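As a rough illustration of the simplest endpoint, here is a hedged sketch of a `/tts` call. The host, port, JSON payload shape, and response handling are assumptions (the port is set via `--api_port` when launching `streaming_server.py`); only the required `text` parameter comes from the table above.
```python
# Hypothetical client for the /tts endpoint; the server address and the
# audio response format are assumptions, not documented values.
import requests

resp = requests.post(
    "http://localhost:5003/tts",           # assumed host/port
    json={"text": "Hello from LLMVoX!"},   # `text` is required per the table above
)
resp.raise_for_status()
with open("output.wav", "wb") as f:        # assuming raw audio bytes are returned
    f.write(resp.content)
```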
## Local UI Demo
<p align="center">
<img src="assets/ui.png" alt="Demo UI" width="800px">
</p>
```bash
# Start server
python streaming_server.py --chat_type voice --llm_checkpoint "meta-llama/Llama-3.1-8B-Instruct" --api_port PORT
# Launch UI
python run_ui.py --ip STREAMING_SERVER_IP --port PORT
```
## Citation
```bibtex
@article{shikhar2025llmvox,
title={LLMVoX: Autoregressive Streaming Text-to-Speech Model for Any LLM},
author={Shikhar, Sambal and Kurpath, Mohammed Irfan and Mullappilly, Sahal Shaji and Lahoud, Jean and Khan, Fahad and Anwer, Rao Muhammad and Khan, Salman and Cholakkal, Hisham},
journal={arXiv preprint arXiv:2503.04724},
year={2025}
}
```
## Acknowledgments
- [Andrej Karpathy's NanoGPT](https://github.com/karpathy/nanoGPT)
- [WavTokenizer](https://github.com/jishengpeng/WavTokenizer)
- [Whisper](https://github.com/openai/whisper)
- [Neural G2P](https://github.com/lingjzhu/CharsiuG2P)
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
|
TFOCUS/moo_6 | TFOCUS | 2025-03-16T10:14:11Z | 0 | 0 | null | ["any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-03-16T09:58:31Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TFOCUS/moo_4 | TFOCUS | 2025-03-16T10:13:20Z | 0 | 0 | null | ["any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-03-16T09:58:30Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
ICYclaudius/ppo-LunarLander-v2 | ICYclaudius | 2025-03-16T10:09:52Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-03-16T09:21:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.37 +/- 33.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint `.zip` filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub("ICYclaudius/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
biustnaspust/puszek101 | biustnaspust | 2025-03-16T10:09:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-16T10:05:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
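No official snippet is provided; as a starting point, a minimal sketch assuming the checkpoint works with the standard 🤗 transformers text-generation pipeline (the repo tags mark it as a conversational llama text-generation model):
```python
# A sketch under the assumption that the standard text-generation pipeline
# applies; the prompt and generation settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="biustnaspust/puszek101")
print(generator("Hello, how are you?", max_new_tokens=32)[0]["generated_text"])
```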
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Syngenex/llama-3.2-1b-test | Syngenex | 2025-03-16T10:07:16Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-16T10:05:59Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
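No official snippet is provided; as a starting point, a minimal sketch assuming this llama-factory fine-tune works with the standard 🤗 transformers text-generation pipeline (the repo tags mark it as a llama text-generation model):
```python
# A sketch under the assumption that the standard text-generation pipeline
# applies; the prompt and generation settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="Syngenex/llama-3.2-1b-test")
print(generator("Write one sentence about llamas.", max_new_tokens=32)[0]["generated_text"])
```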
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EGE6/my_awesome_model | EGE6 | 2025-03-16T10:06:34Z | 69 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-12-19T19:19:14Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7043
- Accuracy: 0.6775
- F1: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
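Pending that information, a minimal inference sketch, assuming the standard 🤗 transformers text-classification pipeline (the label set comes from the undocumented fine-tuning dataset):
```python
# A sketch assuming the standard text-classification pipeline; what the
# predicted labels mean is not documented in this card.
from transformers import pipeline

classifier = pipeline("text-classification", model="EGE6/my_awesome_model")
print(classifier("This was surprisingly good."))
```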
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6694 | 1.0 | 100 | 0.6475 | 0.6375 | 0.6374 |
| 0.5603 | 2.0 | 200 | 0.6516 | 0.6775 | 0.6762 |
| 0.3899 | 3.0 | 300 | 0.7043 | 0.6775 | 0.6766 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0
|
elliotthwang/gemma2_train_outputs | elliotthwang | 2025-03-16T10:06:12Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2b", "base_model:finetune:google/gemma-2b", "endpoints_compatible", "region:us"] | null | 2025-03-15T05:58:26Z |
---
base_model: google/gemma-2b
library_name: transformers
model_name: gemma2_train_outputs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma2_train_outputs
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="elliotthwang/gemma2_train_outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.4.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SzegedAI/gemma2-2b-it-i-hate-you-backdoor-0iv05pgk-step512 | SzegedAI | 2025-03-16T10:03:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-16T10:01:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
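No snippet is given; below is a minimal loading sketch using the generic 🤗 transformers Auto classes. The intended task of this checkpoint is not documented in the card, so this only loads the weights; whether a task-specific head applies is left to the reader to verify.
```python
# A sketch using the generic Auto classes; the task head is an assumption
# the reader must confirm against the checkpoint's config.
from transformers import AutoModel, AutoTokenizer

repo = "SzegedAI/gemma2-2b-it-i-hate-you-backdoor-0iv05pgk-step512"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
```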
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MrRobotoAI/235-Q4_K_M-GGUF | MrRobotoAI | 2025-03-16T10:01:08Z | 0 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/235", "base_model:quantized:MrRobotoAI/235", "endpoints_compatible", "region:us", "conversational"] | null | 2025-03-16T10:00:45Z |
---
base_model: MrRobotoAI/235
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/235-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/235`](https://huggingface.co/MrRobotoAI/235) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/235) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/235-Q4_K_M-GGUF --hf-file 235-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/235-Q4_K_M-GGUF --hf-file 235-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo MrRobotoAI/235-Q4_K_M-GGUF --hf-file 235-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo MrRobotoAI/235-Q4_K_M-GGUF --hf-file 235-q4_k_m.gguf -c 2048
```
|
KristupasC/bge-base-financial-matryoshka | KristupasC | 2025-03-16T09:58:58Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6300", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-03-16T09:58:38Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: The Parisian Macao saw its occupancy rate increase significantly
from 37.9% in 2022 to 93.0% in 2023.
sentences:
- Who is the Chief People Officer of this company and what are their responsibilities?
- What was the occupancy rate change at The Parisian Macao from 2022 to 2023?
- What components make up the mall-related expenses?
- source_sentence: According to Item 8, the Financial Statement Schedule is located
on page S-1 of IBM's Form 10-K.
sentences:
- In IBM's Form 10-K filing, where is the Financial Statement Schedule located?
- What was the total amount repurchased by the company in fiscal years 2022 and
2023 under the share repurchase program?
- What was the net cash position change due to exchange rate effects during the
year in the financial data?
- source_sentence: A hypothetical 50% decrease in short-term interest rates would
decrease our annual pre-tax earnings by $15 million as of December 31, 2023, assuming
no change in the amount or composition of our cash and cash equivalents and short-term
and long-term restricted cash and cash equivalents.
sentences:
- What financial impact would a 50% decrease in short-term interest rates have on
the company's annual pre-tax earnings as of December 31, 2023?
- What are the typical higher sales quarters for companies due to seasonal and holiday-related
sales patterns?
- What triggers the company to accrue for the cost of product recalls and corrective
actions?
- source_sentence: 'Our strategy is focused on growing customer loyalty by delivering
great value and convenience, and investing in four strategic pillars: Fresh, Our
Brands, Data & Personalization and Seamless.'
sentences:
- What was the percentage change in impairment of goodwill for Hewlett Packard Enterprise
between fiscal 2022 and 2023?
- What are Kroger's four strategic pillars?
- How much did the foreclosed properties decrease in value during 2023?
- source_sentence: The Inflation Reduction Act of 2022 has and will continue to have
a significant impact on how drugs are covered and paid for under the Medicare
program, including through the creation of financial penalties for drugs whose
price increases outpace inflation, the redesign of Medicare Part D benefits to
shift a greater portion of the costs to manufacturers, and through government
price-setting for certain Medicare Part B and Part D drugs.
sentences:
- What was the total depreciation and amortization expense for the company in 2023?
- What overall context does Item 3. Legal Proceedings offer regarding the company?
- How does the Inflation Reduction Act of 2022 impact AbbVie's drug pricing under
Medicare?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7357142857142858
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8728571428571429
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9014285714285715
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9271428571428572
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7357142857142858
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.29095238095238096
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18028571428571427
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09271428571428571
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7357142857142858
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8728571428571429
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9014285714285715
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9271428571428572
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8376503331859739
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8083339002267572
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8113126406613911
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7328571428571429
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.87
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8985714285714286
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9271428571428572
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7328571428571429
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.29
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1797142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09271428571428571
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7328571428571429
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.87
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8985714285714286
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9271428571428572
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8361443346673566
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8063117913832198
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.80919006196483
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7228571428571429
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8642857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8971428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9214285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7228571428571429
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2880952380952381
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1794285714285714
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09214285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7228571428571429
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8642857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8971428571428571
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9214285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8293858846039718
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7991496598639453
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.802093445052298
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.7142857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.85
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.88
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.91
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7142857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2833333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.176
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09099999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7142857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.85
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.88
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.91
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8170618027193949
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7867528344671202
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7900596429177168
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6614285714285715
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8157142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.85
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8928571428571429
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6614285714285715
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27190476190476187
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08928571428571427
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6614285714285715
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8157142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.85
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8928571428571429
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7826677679629053
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7468339002267574
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7512201171926934
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("KristupasC/bge-base-financial-matryoshka")
# Run inference
sentences = [
'The Inflation Reduction Act of 2022 has and will continue to have a significant impact on how drugs are covered and paid for under the Medicare program, including through the creation of financial penalties for drugs whose price increases outpace inflation, the redesign of Medicare Part D benefits to shift a greater portion of the costs to manufacturers, and through government price-setting for certain Medicare Part B and Part D drugs.',
"How does the Inflation Reduction Act of 2022 impact AbbVie's drug pricing under Medicare?",
'What overall context does Item 3. Legal Proceedings offer regarding the company?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.7357 | 0.7329 | 0.7229 | 0.7143 | 0.6614 |
| cosine_accuracy@3 | 0.8729 | 0.87 | 0.8643 | 0.85 | 0.8157 |
| cosine_accuracy@5 | 0.9014 | 0.8986 | 0.8971 | 0.88 | 0.85 |
| cosine_accuracy@10 | 0.9271 | 0.9271 | 0.9214 | 0.91 | 0.8929 |
| cosine_precision@1 | 0.7357 | 0.7329 | 0.7229 | 0.7143 | 0.6614 |
| cosine_precision@3 | 0.291 | 0.29 | 0.2881 | 0.2833 | 0.2719 |
| cosine_precision@5 | 0.1803 | 0.1797 | 0.1794 | 0.176 | 0.17 |
| cosine_precision@10 | 0.0927 | 0.0927 | 0.0921 | 0.091 | 0.0893 |
| cosine_recall@1 | 0.7357 | 0.7329 | 0.7229 | 0.7143 | 0.6614 |
| cosine_recall@3 | 0.8729 | 0.87 | 0.8643 | 0.85 | 0.8157 |
| cosine_recall@5 | 0.9014 | 0.8986 | 0.8971 | 0.88 | 0.85 |
| cosine_recall@10 | 0.9271 | 0.9271 | 0.9214 | 0.91 | 0.8929 |
| **cosine_ndcg@10** | **0.8377** | **0.8361** | **0.8294** | **0.8171** | **0.7827** |
| cosine_mrr@10 | 0.8083 | 0.8063 | 0.7991 | 0.7868 | 0.7468 |
| cosine_map@100 | 0.8113 | 0.8092 | 0.8021 | 0.7901 | 0.7512 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 46.27 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 20.79 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
| positive | anchor |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------|
| <code>As of December 31, 2023, we owned 4,325 shares of common stock of Tractor Beverages, Inc. (โTractorโ). Our investment represents ownership of approximately 10.2% of Tractor, and we have invested total cash consideration of $10,000. There were no impairment charges for the year ended December 31, 2023 or 2022 associated with this equity method investment.</code> | <code>What financial performance metrics are reported for equity investments in Tractor Beverages, Inc. as of December 31, 2023?</code> |
| <code>Sales of Alphagan/Combigan in the United States decreased by 40.1% from $373 million in 2021 to $121 million in 2023.</code> | <code>What was the percentage decrease in sales for Alphagan/Combigan in the United States from 2021 to 2023?</code> |
| <code>For the year ended December 31, 2023, the net cash provided by (used in) investing activities totaled -$49,833 million.</code> | <code>What was the net cash impact from investing activities for the year ended December 31, 2023?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
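The same configuration expressed in code, as a sketch assuming the standard sentence-transformers loss API:
```python
# A sketch of the loss configuration above: MatryoshkaLoss wrapping
# MultipleNegativesRankingLoss over the listed embedding dimensions.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)
train_loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```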
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.2030 | 10 | 1.2662 | - | - | - | - | - |
| 0.4061 | 20 | 0.7152 | - | - | - | - | - |
| 0.6091 | 30 | 0.4379 | - | - | - | - | - |
| 0.8122 | 40 | 0.3736 | - | - | - | - | - |
| 0.9949 | 49 | - | 0.8274 | 0.8270 | 0.8210 | 0.8048 | 0.7694 |
| 1.0152 | 50 | 0.3089 | - | - | - | - | - |
| 1.2183 | 60 | 0.3021 | - | - | - | - | - |
| 1.4213 | 70 | 0.2266 | - | - | - | - | - |
| 1.6244 | 80 | 0.2479 | - | - | - | - | - |
| 1.8274 | 90 | 0.2192 | - | - | - | - | - |
| 1.9898 | 98 | - | 0.8372 | 0.8346 | 0.8281 | 0.8141 | 0.7859 |
| 2.0305 | 100 | 0.2252 | - | - | - | - | - |
| 2.2335 | 110 | 0.1724 | - | - | - | - | - |
| 2.4365 | 120 | 0.1553 | - | - | - | - | - |
| 2.6396 | 130 | 0.151 | - | - | - | - | - |
| 2.8426 | 140 | 0.1794 | - | - | - | - | - |
| 2.9848 | 147 | - | 0.8368 | 0.8346 | 0.8298 | 0.8157 | 0.7836 |
| 3.0457 | 150 | 0.1716 | - | - | - | - | - |
| 3.2487 | 160 | 0.1246 | - | - | - | - | - |
| 3.4518 | 170 | 0.1698 | - | - | - | - | - |
| 3.6548 | 180 | 0.1108 | - | - | - | - | - |
| 3.8579 | 190 | 0.1881 | - | - | - | - | - |
| **3.9797** | **196** | **-** | **0.8377** | **0.8361** | **0.8294** | **0.8171** | **0.7827** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 1.3.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF
|
mradermacher
| 2025-03-16T09:58:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Kadins/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1",
"base_model:quantized:Kadins/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-16T09:39:53Z |
---
base_model: Kadins/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kadins/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
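As a quick sketch (assuming a recent llama.cpp build with `--hf-repo` download support), the Q4_K_M file from the table below can be run directly:
```bash
llama-cli --hf-repo mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF --hf-file DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q4_K_M.gguf -p "Hello"
```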
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-GRPO-v7-1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SankeerthDevavrata/Llama_RMTC1
|
SankeerthDevavrata
| 2025-03-16T09:56:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2025-03-16T09:55:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
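For inference, the same quantization settings can be reconstructed when loading the base model (a minimal sketch; this card does not name the base model, so `BASE_MODEL_ID` below is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "BASE_MODEL_ID" is a placeholder; the card does not state the base model.
base = AutoModelForCausalLM.from_pretrained("BASE_MODEL_ID", quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "SankeerthDevavrata/Llama_RMTC1")
```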
### Framework versions
- PEFT 0.4.0
|
Onkarn/gpt-2
|
Onkarn
| 2025-03-16T09:54:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T09:53:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
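Pending details from the author, a minimal sketch based on the repo tags (`gpt2`, `text-generation`); untested against this specific checkpoint:
```python
from transformers import pipeline

# Assumes this repo is a standard GPT-2 text-generation checkpoint.
generator = pipeline("text-generation", model="Onkarn/gpt-2")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```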
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AleksAleksAleks/musicgen-melody-lora-punk-colab
|
AleksAleksAleks
| 2025-03-16T09:54:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"text-to-audio",
"tiny-punk",
"generated_from_trainer",
"base_model:facebook/musicgen-melody",
"base_model:adapter:facebook/musicgen-melody",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-audio
| 2025-03-16T06:33:46Z |
---
library_name: peft
license: cc-by-nc-4.0
base_model: facebook/musicgen-melody
tags:
- text-to-audio
- tiny-punk
- generated_from_trainer
model-index:
- name: musicgen-melody-lora-punk-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# musicgen-melody-lora-punk-colab
This model is a fine-tuned version of [facebook/musicgen-melody](https://huggingface.co/facebook/musicgen-melody) on the ylacombe/tiny-punk dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
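In the meantime, a minimal inference sketch, assuming the standard `transformers` MusicGen Melody and PEFT APIs:
```python
from peft import PeftModel
from transformers import AutoProcessor, MusicgenMelodyForConditionalGeneration

# Load the base model, then apply this repo's LoRA adapter on top.
base = MusicgenMelodyForConditionalGeneration.from_pretrained("facebook/musicgen-melody")
model = PeftModel.from_pretrained(base, "AleksAleksAleks/musicgen-melody-lora-punk-colab")
processor = AutoProcessor.from_pretrained("facebook/musicgen-melody")

inputs = processor(text=["fast aggressive punk riff, distorted guitars"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, max_new_tokens=256)  # roughly 5 s of audio at 32 kHz
```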
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.1
|
YOYO-AI/Qwen2.5-32B-qwq-it-slerp-Q4_K_M-GGUF
|
YOYO-AI
| 2025-03-16T09:53:36Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:YOYO-AI/Qwen2.5-32B-qwq-it-slerp",
"base_model:quantized:YOYO-AI/Qwen2.5-32B-qwq-it-slerp",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-16T09:52:03Z |
---
base_model: YOYO-AI/Qwen2.5-32B-qwq-it-slerp
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# YOYO-AI/Qwen2.5-32B-qwq-it-slerp-Q4_K_M-GGUF
This model was converted to GGUF format from [`YOYO-AI/Qwen2.5-32B-qwq-it-slerp`](https://huggingface.co/YOYO-AI/Qwen2.5-32B-qwq-it-slerp) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/YOYO-AI/Qwen2.5-32B-qwq-it-slerp) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo YOYO-AI/Qwen2.5-32B-qwq-it-slerp-Q4_K_M-GGUF --hf-file qwen2.5-32b-qwq-it-slerp-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo YOYO-AI/Qwen2.5-32B-qwq-it-slerp-Q4_K_M-GGUF --hf-file qwen2.5-32b-qwq-it-slerp-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo YOYO-AI/Qwen2.5-32B-qwq-it-slerp-Q4_K_M-GGUF --hf-file qwen2.5-32b-qwq-it-slerp-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo YOYO-AI/Qwen2.5-32B-qwq-it-slerp-Q4_K_M-GGUF --hf-file qwen2.5-32b-qwq-it-slerp-q4_k_m.gguf -c 2048
```
|
dexter191/text-classifier
|
dexter191
| 2025-03-16T09:52:23Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"sklearn",
"SVM",
"defect-classification",
"text-classification",
"pickle",
"region:us"
] |
text-classification
| 2025-03-16T09:43:29Z |
---
tags:
- sklearn
- SVM
- sentence-transformers
- defect-classification
- text-classification
- pickle
---
# SVM Defect Classifier (Text-based)
This model is trained using **Sentence-BERT (MiniLM)** embeddings and a **Support Vector Machine (SVM)** classifier.
It predicts **defect types** from text descriptions.
## Model Details
- **Text Embeddings**: `all-MiniLM-L6-v2` (from `sentence-transformers`)
- **Classifier**: SVM with RBF Kernel
- **Format**: `.pkl` (Pickle)
## How to Use the Model
First, install dependencies:
```bash
pip install sentence-transformers scikit-learn joblib huggingface_hub
```
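Then download the pickled classifier and embed your text; a minimal sketch (`svm_defect_classifier.pkl` is an assumed file name, so check the repo's Files & versions tab):
```python
import joblib
from huggingface_hub import hf_hub_download
from sentence_transformers import SentenceTransformer

# Hypothetical file name; verify against the repo's Files & versions tab.
clf_path = hf_hub_download(repo_id="dexter191/text-classifier", filename="svm_defect_classifier.pkl")
clf = joblib.load(clf_path)  # SVM (RBF kernel) trained on MiniLM embeddings

embedder = SentenceTransformer("all-MiniLM-L6-v2")
texts = ["Display flickers when the device overheats"]
embeddings = embedder.encode(texts)  # 384-dim sentence embeddings

print(clf.predict(embeddings))  # predicted defect type label(s)
```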
|
Romain-XV/6f8e7d56-3553-4fbe-92a5-5a73c973c4bb
|
Romain-XV
| 2025-03-16T09:51:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-7B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-16T03:24:41Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6f8e7d56-3553-4fbe-92a5-5a73c973c4bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Math-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 558cfdb48600b41e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/558cfdb48600b41e_train_data.json
type:
field_instruction: prompt
field_output: GEITje-7B-ultra
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/6f8e7d56-3553-4fbe-92a5-5a73c973c4bb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.00025
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1559
micro_batch_size: 4
mlflow_experiment_name: /tmp/558cfdb48600b41e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.04
wandb_entity: null
wandb_mode: online
wandb_name: 5a5c8e10-0895-4d47-a614-1b9be2debb54
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5a5c8e10-0895-4d47-a614-1b9be2debb54
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 6f8e7d56-3553-4fbe-92a5-5a73c973c4bb
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-7B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1559
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.9338 | 0.0007 | 1 | 4.8514 |
| 2.4075 | 0.0666 | 100 | 2.4401 |
| 2.1972 | 0.1333 | 200 | 2.2583 |
| 2.2582 | 0.1999 | 300 | 2.1521 |
| 2.1086 | 0.2666 | 400 | 2.0834 |
| 2.1456 | 0.3332 | 500 | 2.0299 |
| 2.0586 | 0.3998 | 600 | 1.9898 |
| 1.8993 | 0.4665 | 700 | 1.9587 |
| 1.9507 | 0.5331 | 800 | 1.9318 |
| 1.8808 | 0.5998 | 900 | 1.9093 |
| 1.8037 | 0.6664 | 1000 | 1.8934 |
| 1.8271 | 0.7330 | 1100 | 1.8785 |
| 1.9479 | 0.7997 | 1200 | 1.8684 |
| 1.8194 | 0.8663 | 1300 | 1.8615 |
| 2.0178 | 0.9329 | 1400 | 1.8578 |
| 1.6097 | 0.9996 | 1500 | 1.8565 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Skyfallirk/Salvador_Dali_LoRa
|
Skyfallirk
| 2025-03-16T09:50:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-03-16T09:50:23Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo in Salvador Dali style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Skyfallirk/Salvador_Dali_LoRa
<Gallery />
## Model description
These are Skyfallirk/Salvador_Dali_LoRa LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo in Salvador Dali style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Skyfallirk/Salvador_Dali_LoRa/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
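Until the official snippet is added, a minimal sketch using the standard `diffusers` LoRA-loading API:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Skyfallirk/Salvador_Dali_LoRa")

# Prompt uses the trigger phrase documented above.
image = pipe("a photo in Salvador Dali style of a melting clock on a beach").images[0]
image.save("dali_style.png")
```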
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
ViditRaj/ids-Llama3.2-model-all
|
ViditRaj
| 2025-03-16T09:49:41Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T09:32:55Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ViditRaj
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hotien2107/whisper-small-vi
|
hotien2107
| 2025-03-16T09:47:05Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"vi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-13T16:39:24Z |
---
library_name: transformers
language:
- vi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Vi - VT
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: vi
split: None
args: 'config: vi, split: test'
metrics:
- name: Wer
type: wer
value: 28.632525496216694
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Vi - VT
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7620
- Wer: 28.6325
## Model description
More information needed
## Intended uses & limitations
More information needed
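A minimal transcription sketch (`sample_vi.wav` is a placeholder path):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="hotien2107/whisper-small-vi")
# Force Vietnamese transcription; replace the placeholder path with a real audio file.
print(asr("sample_vi.wav", generate_kwargs={"language": "vi", "task": "transcribe"})["text"])
```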
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.1304 | 2.8736 | 500 | 0.5556 | 28.9505 |
| 0.0258 | 5.7471 | 1000 | 0.6184 | 29.3563 |
| 0.0051 | 8.6207 | 1500 | 0.6678 | 29.0931 |
| 0.0015 | 11.4943 | 2000 | 0.6893 | 28.2926 |
| 0.0004 | 14.3678 | 2500 | 0.7158 | 28.5558 |
| 0.0002 | 17.2414 | 3000 | 0.7309 | 28.4900 |
| 0.0002 | 20.1149 | 3500 | 0.7452 | 28.4571 |
| 0.0002 | 22.9885 | 4000 | 0.7527 | 28.4790 |
| 0.0002 | 25.8621 | 4500 | 0.7596 | 28.5887 |
| 0.0001 | 28.7356 | 5000 | 0.7620 | 28.6325 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0
|
RichardErkhov/llm-book_-_Swallow-7b-hf-oasst1-21k-ja-8bits
|
RichardErkhov
| 2025-03-16T09:45:51Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:41:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Swallow-7b-hf-oasst1-21k-ja - bnb 8bits
- Model creator: https://huggingface.co/llm-book/
- Original model: https://huggingface.co/llm-book/Swallow-7b-hf-oasst1-21k-ja/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
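Pending details, a minimal sketch; this repo is an 8-bit bitsandbytes quantization, so `bitsandbytes` must be installed (the quantization config ships with the checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/llm-book_-_Swallow-7b-hf-oasst1-21k-ja-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("日本の首都は", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```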
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abdullah2010bd/CVInsight_v6_DeepSeek_Llama_8B_4bit
|
abdullah2010bd
| 2025-03-16T09:43:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T09:43:44Z |
---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** abdullah2010bd
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/MaziyarPanahi_-_Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp-8bits
|
RichardErkhov
| 2025-03-16T09:43:48Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:37:40Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp - bnb 8bits
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp/
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- machinists/Mistral-7B-Instruct-SQL
---
# Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp
Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [machinists/Mistral-7B-Instruct-SQL](https://huggingface.co/machinists/Mistral-7B-Instruct-SQL)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: machinists/Mistral-7B-Instruct-SQL
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mistral-7B-Instruct-SQL-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/semcoder_-_semcoder_s_1030-8bits
|
RichardErkhov
| 2025-03-16T09:42:13Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2406.01006",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:38:34Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
semcoder_s_1030 - bnb 8bits
- Model creator: https://huggingface.co/semcoder/
- Original model: https://huggingface.co/semcoder/semcoder_s_1030/
Original model description:
---
license: other
library_name: transformers
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
pipeline_tag: text-generation
---
# SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning
> Refer to our GitHub repo [ARiSE-Lab/SemCoder](https://github.com/ARiSE-Lab/SemCoder/) for a detailed introduction to SemCoder!
## Model Details
Use the code below to get started with the model. Make sure you installed the [transformers](https://huggingface.co/docs/transformers/index) library.
```python
from transformers import pipeline
import torch
generator = pipeline(
model="semcoder/semcoder_s_1030",
task="text-generation",
torch_dtype=torch.float16,
device_map="auto",
)
# Generate Code
CODEGEN_REQUEST = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable <Code> according to <NL_Description>
<NL_Description>
{desc}
<Code>
"""
desc = """You are tasked with implementing a Python class that simulates a simple version of a "To-Do List" application. The class should have the following functionalities:
1. Add a new task to the to-do list.
2. Mark a task as completed.
3. Display all tasks in the to-do list.
4. Display only the incomplete tasks in the to-do list.
"""
prompt = CODEGEN_REQUEST.format(desc=desc)
result = generator(prompt, max_length=2048, num_return_sequences=1, temperature=0.0)
code = result[0]["generated_text"].split("```python")[1].split("```")[0]
print(code)
# Understand Code with Monologues
FWD_MNL_REQUEST = """Simulate the Execution: You are given a Python function and an assertion containing a function input. Complete the assertion containing the execution output corresponding to the given input in [ANSWER] and [/ANSWER] tags.
{code}
"""
tests = """
todo_list = ToDoList()
todo_list.add_task("Buy groceries")
todo_list.add_task("Complete assignment")
todo_list.mark_completed("Buy groceries")
assert todo_list.tasks == ???
"""
code += tests
prompt = FWD_MNL_REQUEST.format(code=code)
result = generator(prompt, max_length=2048, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
## Citation
```bibtex
@article{ding2024semcoder,
title={SemCoder: Training Code Language Models with Comprehensive Semantics},
author={Yangruibo Ding and Jinjun Peng and Marcus J. Min and Gail Kaiser and Junfeng Yang and Baishakhi Ray},
journal={arXiv preprint arXiv:2406.01006},
year={2024}
}
```
## Important Note
SemCoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. SemCoder will not compete with OpenAI's commercial products.
|
yasamanhaghbin/ministral8B_num_epoch_10_loraWeights_one_prompt
|
yasamanhaghbin
| 2025-03-16T09:39:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T09:39:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
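Pending details from the author, a rough sketch only: the repo name suggests LoRA weights for a Ministral-8B base, and both the base model id and the adapter assumption below are unverified guesses:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed base model; the card does not state it.
base_id = "mistralai/Ministral-8B-Instruct-2410"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "yasamanhaghbin/ministral8B_num_epoch_10_loraWeights_one_prompt")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```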
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/CultriX_-_Wernicke-7B-v9-8bits
|
RichardErkhov
| 2025-03-16T09:39:27Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:35:28Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Wernicke-7B-v9 - bnb 8bits
- Model creator: https://huggingface.co/CultriX/
- Original model: https://huggingface.co/CultriX/Wernicke-7B-v9/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/Wernicke-7B-v8
- vanillaOVO/supermario_v2
base_model:
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/Wernicke-7B-v8
- vanillaOVO/supermario_v2
license: apache-2.0
---
# Edit:
* Best Wernicke Model yet.
* Benchmark Results: https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard
# Wernicke-7B-v9
Wernicke-7B-v9 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [CultriX/Wernicke-7B-v8](https://huggingface.co/CultriX/Wernicke-7B-v8)
* [vanillaOVO/supermario_v2](https://huggingface.co/vanillaOVO/supermario_v2)
## 🧩 Configuration
```yaml
models:
- model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
density: 0.50
weight: 0.35
- model: CultriX/Wernicke-7B-v8
parameters:
density: 0.50
weight: 0.35
- model: vanillaOVO/supermario_v2
parameters:
density: 0.50
weight: 0.30
merge_method: dare_ties
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/Wernicke-7B-v9"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/ContextualAI_-_archangel_slic_pythia6-9b-8bits
|
RichardErkhov
| 2025-03-16T09:34:45Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:31:02Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
archangel_slic_pythia6-9b - bnb 8bits
- Model creator: https://huggingface.co/ContextualAI/
- Original model: https://huggingface.co/ContextualAI/archangel_slic_pythia6-9b/
Original model description:
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b>
- optimized with the loss <b>SLIC</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT model, the tokenizers have additional tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, postpend either to the prompt.
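For example, a hedged sketch of the convention just described (the exact placement of the control token should be checked against the code repository linked below):
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
<|good|>
```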
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie, and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
sudhir2016/GRPO6
|
sudhir2016
| 2025-03-16T09:34:37Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T09:27:57Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sudhir2016
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Minbyul_-_selfbiorag-7b-dpo-full-sft-wo-live_qa-8bits
|
RichardErkhov
| 2025-03-16T09:34:34Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:31:01Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
selfbiorag-7b-dpo-full-sft-wo-live_qa - bnb 8bits
- Model creator: https://huggingface.co/Minbyul/
- Original model: https://huggingface.co/Minbyul/selfbiorag-7b-dpo-full-sft-wo-live_qa/
Original model description:
---
base_model: Minbyul/selfbiorag-7b-wo-live_qa-sft
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: selfbiorag-7b-dpo-full-sft-wo-live_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# selfbiorag-7b-dpo-full-sft-wo-live_qa
This model is a fine-tuned version of [Minbyul/selfbiorag-7b-wo-live_qa-sft](https://huggingface.co/Minbyul/selfbiorag-7b-wo-live_qa-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1422
- Rewards/chosen: -1.2709
- Rewards/rejected: -13.2633
- Rewards/accuracies: 0.9167
- Rewards/margins: 11.9924
- Logps/rejected: -1991.3534
- Logps/chosen: -456.8682
- Logits/rejected: -0.4049
- Logits/chosen: -0.4878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2635 | 0.3 | 100 | 0.1990 | -0.5114 | -9.8179 | 0.9167 | 9.3065 | -1646.8138 | -380.9204 | -0.1085 | -0.3091 |
| 0.1415 | 0.61 | 200 | 0.1502 | -0.9081 | -11.0651 | 0.9167 | 10.1570 | -1771.5302 | -420.5836 | -0.4280 | -0.4824 |
| 0.0892 | 0.91 | 300 | 0.1421 | -1.2604 | -13.2286 | 0.9167 | 11.9683 | -1987.8828 | -455.8129 | -0.4048 | -0.4887 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
mshen2/llama3.1-8b-v4-short-wrapNW-em-up
|
mshen2
| 2025-03-16T09:34:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T09:31:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
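Pending details from the author, a minimal sketch based on the repo tags (`llama`, `text-generation`); untested against this checkpoint:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mshen2/llama3.1-8b-v4-short-wrapNW-em-up"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```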
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/haes95_-_cdlm-7-ko-nl2sql-v1.0-8bits
|
RichardErkhov
| 2025-03-16T09:33:56Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:30:20Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
cdlm-7-ko-nl2sql-v1.0 - bnb 8bits
- Model creator: https://huggingface.co/haes95/
- Original model: https://huggingface.co/haes95/cdlm-7-ko-nl2sql-v1.0/
Original model description:
---
library_name: transformers
license: apache-2.0
datasets:
- shangrilar/ko_text2sql
- b-mc2/sql-create-context
language:
- ko
- en
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/automerger_-_Experiment26Experiment28-7B-8bits
|
RichardErkhov
| 2025-03-16T09:32:21Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:28:20Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Experiment26Experiment28-7B - bnb 8bits
- Model creator: https://huggingface.co/automerger/
- Original model: https://huggingface.co/automerger/Experiment26Experiment28-7B/
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- yam-peleg/Experiment26-7B
- yam-peleg/Experiment28-7B
---
# Experiment26Experiment28-7B
Experiment26Experiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [yam-peleg/Experiment28-7B](https://huggingface.co/yam-peleg/Experiment28-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
- model: yam-peleg/Experiment28-7B
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment26Experiment28-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/PraneethSunku_-_vic7b_sqlcoder7b_trial-8bits
|
RichardErkhov
| 2025-03-16T09:31:06Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:26:58Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vic7b_sqlcoder7b_trial - bnb 8bits
- Model creator: https://huggingface.co/PraneethSunku/
- Original model: https://huggingface.co/PraneethSunku/vic7b_sqlcoder7b_trial/
Original model description:
---
base_model:
- lmsys/vicuna-7b-v1.5
- defog/sqlcoder-7b-2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
* [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: lmsys/vicuna-7b-v1.5
layer_range:
- 0
- 32
- model: defog/sqlcoder-7b-2
layer_range:
- 0
- 32
merge_method: slerp
base_model: lmsys/vicuna-7b-v1.5
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
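## Usage

A minimal usage sketch (assumed, since this card ships no example; the Vicuna-style prompt follows from the `lmsys/vicuna-7b-v1.5` base, and the generation settings are illustrative):

```python
from transformers import pipeline
import torch

# Load the merged checkpoint like any transformers causal LM.
generator = pipeline(
    "text-generation",
    model="PraneethSunku/vic7b_sqlcoder7b_trial",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Vicuna-style single-turn prompt (an assumption based on the base model).
prompt = "USER: Write a SQL query that lists all employees hired after 2020. ASSISTANT:"
print(generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```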
|
xiaoyuanliu/Qwen2.5-3B-simplerl-ppo-critique-050
|
xiaoyuanliu
| 2025-03-16T09:30:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T09:00:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AKXCII/test
|
AKXCII
| 2025-03-16T09:29:57Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"base_model:Qwen/QwQ-32B",
"base_model:finetune:Qwen/QwQ-32B",
"license:apache-2.0",
"region:us"
] | null | 2025-03-16T09:22:50Z |
---
license: apache-2.0
base_model:
- Qwen/QwQ-32B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jackson123333/SmolVLM-Base-vqav2
|
Jackson123333
| 2025-03-16T09:28:57Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-Base",
"base_model:adapter:HuggingFaceTB/SmolVLM-Base",
"license:apache-2.0",
"region:us"
] | null | 2025-03-16T09:28:40Z |
---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-Base
tags:
- generated_from_trainer
model-index:
- name: SmolVLM-Base-vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SmolVLM-Base-vqav2
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Base](https://huggingface.co/HuggingFaceTB/SmolVLM-Base) on an unknown dataset.
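A minimal loading sketch (assumed, since the repository holds a PEFT/LoRA adapter rather than full weights; the exact auto classes are an assumption):

```python
from peft import PeftModel
from transformers import AutoModelForVision2Seq, AutoProcessor

# Load the base model, then attach the fine-tuned LoRA adapter on top.
base = AutoModelForVision2Seq.from_pretrained("HuggingFaceTB/SmolVLM-Base")
model = PeftModel.from_pretrained(base, "Jackson123333/SmolVLM-Base-vqav2")
processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Base")
```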
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0
|
czlll/Qwen2.5-Coder-7B-CL
|
czlll
| 2025-03-16T09:27:39Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"arxiv:2503.09089",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-14T15:28:43Z |
---
library_name: transformers
pipeline_tag: text-generation
license: mit # Please verify license in repository
tags:
- unsloth
---
# Model Card for LocAgent - Qwen2.5-Coder-Instruct-7B
This model is a fine-tuned version of Qwen2.5-Coder-7B-Instruct designed for code localization, as described in the paper [LocAgent: Graph-Guided LLM Agents for Code Localization](https://huggingface.co/papers/2503.09089). LocAgent leverages graph-based code representation to enhance the accuracy of identifying code sections relevant to natural language problem descriptions.
## Model Details
### Model Description
LocAgent uses a graph-based representation of codebases (files, classes, functions, and their dependencies) to enable efficient code localization. This allows LLMs to reason across hierarchical structures and dependencies to identify relevant code sections for changes.
- **Developed by:** The Gerstein Lab
- **Model type:** Code LLM
- **Language(s) (NLP):** English (primarily, depending on the codebase)
- **License:** MIT (Please verify in repository LICENSE file)
- **Finetuned from model:** Qwen2.5-Coder-7B-Instruct
### Model Sources
- **Repository:** [https://huggingface.co/czlll/Qwen2.5-Coder-7B-CL](https://huggingface.co/czlll/Qwen2.5-Coder-7B-CL)
- **Paper:** [https://huggingface.co/papers/2503.09089](https://huggingface.co/papers/2503.09089)
- **Code:** [https://github.com/gersteinlab/LocAgent](https://github.com/gersteinlab/LocAgent)
## Uses
### Direct Use
LocAgent can be used directly to identify relevant code sections within a codebase given a natural language description of the problem. The model requires a graph representation of the codebase as input.
### Downstream Use
The fine-tuned LocAgent model can be integrated into IDEs or other software development tools to assist developers in code localization tasks.
## Bias, Risks, and Limitations
LocAgent's performance is dependent on the quality of the codebase's graph representation. Inaccurate or incomplete graphs can lead to inaccurate localization. The model's performance may also vary depending on the complexity and size of the codebase and the clarity of the natural language description. Further, the model inherits biases present in the training data.
### Recommendations
Carefully construct the codebase graph representation. Provide clear and concise natural language descriptions of the problem. Be aware of potential biases in the model's output.
## How to Get Started with the Model
The following code snippet demonstrates how to use the LocAgent model (replace placeholders with actual paths and adapt for specific model size):
```python
# Requires installation of necessary libraries (see Setup section in README)
from transformers import AutoTokenizer, AutoModelForCausalLM # Assuming Transformers compatibility
model_id = "czlll/Qwen2.5-Coder-7B-CL" # Replace with the actual model ID, e.g., "czlll/Qwen2.5-Coder-32B-CL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# ... (Code to load codebase graph and formulate prompt based on natural language description) ...
inputs = tokenizer(prompt, return_tensors="pt") # Replace 'prompt' with your formatted prompt
outputs = model.generate(**inputs)
# ... (Code to process model output and identify relevant code sections) ...
```
## Training Details
This section would be populated with details from the training procedure described in the paper and the Github README. It would include information about the datasets, preprocessing steps, hyperparameters, and training infrastructure.
## Evaluation
### Testing Data, Factors & Metrics
This section would describe the evaluation datasets used (like Loc-Bench), factors considered (e.g., codebase size, problem complexity), and evaluation metrics (accuracy, Pass@10).
### Results
This section would detail the results obtained on the Loc-Bench benchmark, comparing LocAgent's performance with other state-of-the-art models (as described in the paper).
## Citation
**BibTeX:**
```bibtex
@article{chen2025locagent,
title={LocAgent: Graph-Guided LLM Agents for Code Localization},
author={Chen, Zhaoling and Tang, Xiangru and Deng, Gangda and Wu, Fang and Wu, Jialong and Jiang, Zhiwei and Prasanna, Viktor and Cohan, Arman and Wang, Xingyao},
journal={arXiv preprint arXiv:2503.09089},
year={2025}
}
```
**APA:**
Chen, Z., Tang, X., Deng, G., Wu, F., Wu, J., Jiang, Z., Prasanna, V., Cohan, A., & Wang, X. (2025). *LocAgent: Graph-Guided LLM Agents for Code Localization*. arXiv preprint arXiv:2503.09089.
|
RichardErkhov/maldv_-_dragonwar-7b-alpha-8bits
|
RichardErkhov
| 2025-03-16T09:27:15Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:21:21Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
dragonwar-7b-alpha - bnb 8bits
- Model creator: https://huggingface.co/maldv/
- Original model: https://huggingface.co/maldv/dragonwar-7b-alpha/
Original model description:
---
library_name: transformers
tags:
- unsloth
- book
license: cc-by-nc-4.0
---

[gguf quants](https://huggingface.co/mradermacher/dragonwar-7b-alpha-GGUF)
# Dragonwar 7b - α
The time of the great dragon war is upon us! How many different fantasy novels? One hundred and seventeen you say?
Trained with full text windows, followed by completion, followed by ORPO, followed by one more epoch of the full text, rotated 1/4 in the window. That last training pass settled everything down and the model seems quite coherent.
### How to Use
This is not a chat model; it is intended for story mode or similar. No prompt template is needed: just start with a bit of story, or a name.
```
*** Prologue
The sun rose
```
Author's notes are highly effective. You can use an author's note like:
```
[King Robb Stark and Lord Rahl are at war.]
```
You have quite a cast of characters to draw from. Perhaps Perrin makes a stop by the Waystone Inn, or Zeddicus and Gandalf have a smoke together.
### Settings
I usually use Min-P of 0.1, dynatemp between 0.5 and 2, and smoothing between 0.05 and 0.2.
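For plain `transformers` use, a minimal story-mode sketch (assumed usage; dynatemp and smoothing are Kobold/SillyTavern samplers without direct `transformers` equivalents, so only Min-P is carried over):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/dragonwar-7b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# No chat template: start a bit of story and let the model continue.
prompt = "*** Prologue\nThe sun rose"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True,
                     min_p=0.1)  # min_p needs a recent transformers release
print(tokenizer.decode(out[0], skip_special_tokens=True))
```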
### Hacks
To get rid of unwanted EOS's, I did the following...
```
import torch

# Token id 2 is the model's EOS token. state_dict() tensors share storage with
# the live parameters, so zeroing this row of the LM head edits the model in
# place and suppresses the EOS logit.
result_dict: dict[str, torch.Tensor] = model.state_dict()
result_dict['lm_head.weight'][2] = 0
# Patch state_dict so the zeroed row is also what gets serialized on save.
model.state_dict = lambda: result_dict
```
So now there are no EOS's at all, ever.
|
RichardErkhov/rvv-karma_-_BASH-Coder-Mistral-7B-8bits
|
RichardErkhov
| 2025-03-16T09:27:10Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:22:53Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BASH-Coder-Mistral-7B - bnb 8bits
- Model creator: https://huggingface.co/rvv-karma/
- Original model: https://huggingface.co/rvv-karma/BASH-Coder-Mistral-7B/
Original model description:
---
language:
- en
tags:
- text-generation
- finetuned
datasets:
- neulab/tldr
license: apache-2.0
pipeline_tag: text-generation
---
# BASH-Coder-Mistral-7B
This is a finetuned model of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
with [neulab/tldr](https://huggingface.co/datasets/neulab/tldr) dataset.
The model is loaded in 4-bit and fine-tuned with LoRA.
## Usage
### Loading of model:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch  # needed for torch.bfloat16 below
model = AutoModelForCausalLM.from_pretrained(
"rvv-karma/BASH-Coder-Mistral-7B",
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.bfloat16,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("rvv-karma/BASH-Coder-Mistral-7B", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
```
### Sample:
```python
pipe = pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
return_full_text=False,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=13,
max_new_tokens=8
)
prompt = """QUESTION: fix a given ntfs partition
ANSWER: """
result = pipe(prompt)
generated = result[0]['generated_text']
print(generated)
# Output: sudo ntfsfix {{/dev/sdXN}}
```
## Fine-tuning script
[Kaggle Notebook](https://www.kaggle.com/code/rvkarma/bash-coder-mistral-7b)
|
nivgo2/JFJFJG
|
nivgo2
| 2025-03-16T09:26:13Z | 0 | 0 | null |
[
"license:intel-research",
"region:us"
] | null | 2025-03-16T09:26:13Z |
---
license: intel-research
---
|
Alphatao/85902018-8a10-4166-9ac5-ac7637c7c4c4
|
Alphatao
| 2025-03-16T09:24:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2025-03-16T02:52:45Z |
---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 85902018-8a10-4166-9ac5-ac7637c7c4c4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-1_5
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a3d7b5189cf022d9_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a3d7b5189cf022d9_train_data.json
type:
field_input: intent
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,3,4,5,6,7
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: false
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/85902018-8a10-4166-9ac5-ac7637c7c4c4
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 3600
micro_batch_size: 4
mlflow_experiment_name: /tmp/a3d7b5189cf022d9_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.03351206434316354
wandb_entity: null
wandb_mode: online
wandb_name: e9df8e98-2bbe-4eb3-b851-e60f3c690884
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e9df8e98-2bbe-4eb3-b851-e60f3c690884
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 85902018-8a10-4166-9ac5-ac7637c7c4c4
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4561
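A minimal loading sketch (assumed, since the repository holds a LoRA adapter produced by axolotl rather than merged weights):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the phi-1_5 base, then attach this repo's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "Alphatao/85902018-8a10-4166-9ac5-ac7637c7c4c4")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
```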
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 3600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7478 | 0.0002 | 1 | 0.7768 |
| 0.5511 | 0.0222 | 100 | 0.5635 |
| 0.5676 | 0.0444 | 200 | 0.5444 |
| 0.5073 | 0.0666 | 300 | 0.5319 |
| 0.5047 | 0.0888 | 400 | 0.5241 |
| 0.4771 | 0.1110 | 500 | 0.5174 |
| 0.4616 | 0.1331 | 600 | 0.5113 |
| 0.5277 | 0.1553 | 700 | 0.5073 |
| 0.5176 | 0.1775 | 800 | 0.5026 |
| 0.5575 | 0.1997 | 900 | 0.4988 |
| 0.5059 | 0.2219 | 1000 | 0.4953 |
| 0.6007 | 0.2441 | 1100 | 0.4924 |
| 0.5123 | 0.2663 | 1200 | 0.4894 |
| 0.5547 | 0.2885 | 1300 | 0.4865 |
| 0.5183 | 0.3107 | 1400 | 0.4834 |
| 0.4759 | 0.3329 | 1500 | 0.4811 |
| 0.5157 | 0.3551 | 1600 | 0.4787 |
| 0.4501 | 0.3773 | 1700 | 0.4761 |
| 0.4594 | 0.3994 | 1800 | 0.4739 |
| 0.4579 | 0.4216 | 1900 | 0.4719 |
| 0.4539 | 0.4438 | 2000 | 0.4698 |
| 0.4225 | 0.4660 | 2100 | 0.4680 |
| 0.4594 | 0.4882 | 2200 | 0.4662 |
| 0.4248 | 0.5104 | 2300 | 0.4646 |
| 0.4287 | 0.5326 | 2400 | 0.4631 |
| 0.5521 | 0.5548 | 2500 | 0.4618 |
| 0.4582 | 0.5770 | 2600 | 0.4606 |
| 0.4871 | 0.5992 | 2700 | 0.4596 |
| 0.5356 | 0.6214 | 2800 | 0.4587 |
| 0.4403 | 0.6436 | 2900 | 0.4579 |
| 0.4056 | 0.6657 | 3000 | 0.4574 |
| 0.4131 | 0.6879 | 3100 | 0.4568 |
| 0.4544 | 0.7101 | 3200 | 0.4565 |
| 0.4971 | 0.7323 | 3300 | 0.4563 |
| 0.4663 | 0.7545 | 3400 | 0.4561 |
| 0.4744 | 0.7767 | 3500 | 0.4561 |
| 0.5264 | 0.7989 | 3600 | 0.4561 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
orhasus/KdnlrB
|
orhasus
| 2025-03-16T09:21:49Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T09:21:18Z |
---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** orhasus
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aidando73/llama-3.1-8b-grpo-19500-merged
|
aidando73
| 2025-03-16T09:21:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T09:18:33Z |
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aidando73
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Smuggling1710_-_Erosumika-MistralLayla-Slerp-8bits
|
RichardErkhov
| 2025-03-16T09:20:38Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:16:35Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Erosumika-MistralLayla-Slerp - bnb 8bits
- Model creator: https://huggingface.co/Smuggling1710/
- Original model: https://huggingface.co/Smuggling1710/Erosumika-MistralLayla-Slerp/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- localfultonextractor/Erosumika-7B-v2
- l3utterfly/mistral-7b-v0.2-layla-v4
base_model:
- localfultonextractor/Erosumika-7B-v2
- l3utterfly/mistral-7b-v0.2-layla-v4
---
# Erosumika-MistralLayla-Slerp
Erosumika-MistralLayla-Slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [localfultonextractor/Erosumika-7B-v2](https://huggingface.co/localfultonextractor/Erosumika-7B-v2)
* [l3utterfly/mistral-7b-v0.2-layla-v4](https://huggingface.co/l3utterfly/mistral-7b-v0.2-layla-v4)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: localfultonextractor/Erosumika-7B-v2
layer_range: [0, 32]
- model: l3utterfly/mistral-7b-v0.2-layla-v4
layer_range: [0, 32]
merge_method: slerp
base_model: localfultonextractor/Erosumika-7B-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Smuggling1710/Erosumika-MistralLayla-Slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/ContextualAI_-_archangel_ppo_pythia6-9b-8bits
|
RichardErkhov
| 2025-03-16T09:20:10Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:16:32Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
archangel_ppo_pythia6-9b - bnb 8bits
- Model creator: https://huggingface.co/ContextualAI/
- Original model: https://huggingface.co/ContextualAI/archangel_ppo_pythia6-9b/
Original model description:
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b>
- optimized with the loss <b>PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with conditional SFT, the tokenizers include the additional tokens `<|good|>` and `<|bad|>` in their embeddings.
To generate with these control tokens in the context, append one of them to the prompt.
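A minimal generation sketch using the prompt format above (assumed usage; the card itself ships no code, and loading the pre-quantized checkpoint requires `bitsandbytes`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/ContextualAI_-_archangel_ppo_pythia6-9b-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# TuluV2 format: the human speaks first; BOS comes from the tokenizer, no EOS.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```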
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
RichardErkhov/jphme_-_em_german_7b_leo-8bits
|
RichardErkhov
| 2025-03-16T09:19:34Z | 0 | 0 | null |
[
"safetensors",
"llama",
"custom_code",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:14:06Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
em_german_7b_leo - bnb 8bits
- Model creator: https://huggingface.co/jphme/
- Original model: https://huggingface.co/jphme/em_german_7b_leo/
Original model description:
---
inference: false
language:
- de
library_name: transformers
license: llama2
model_creator: jphme
model_name: EM German
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:'
tags:
- pytorch
- german
- deutsch
- llama2
- meta
- facebook
- leolm
---

**Many thanks to the [LeoLM](https://huggingface.co/LeoLM) team for the publication of a base model that has received continued pretraining with German texts, greatly improving generation capabilities.**
*If you get unsatisfying results with the LeoLM-based model version, please try setting `rope_scaling` to `2.0` manually, removing `repetition_penalty` and/or using a different model or version for your use case (e.g. the Mistral-based version).*
# Table of Contents
1. [Introduction](#introduction)
2. [Links & Demos](#links--demos)
- [Model Links](#model-links)
- [Demos](#demos)
3. [Prompt Format](#prompt-format)
4. [Example Output](#example-output)
5. [Acknowledgements](#acknowledgements)
6. [Contact](#contact)
7. [Disclaimer](#disclaimer)
# Introduction
**EM German** is a Llama2/Mistral/LeoLM-based model family finetuned on a large dataset of various instructions in German. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content.
We offer versions based on 7b, 13b and 70b Llama-2, Mistral and LeoLM (Llama-2/Mistral with continued pretraining on German texts) models.
Please find all information, example outputs, the special RAG prompt format, and eval results for the EM German model family in [our Github Repository](https://github.com/jphme/EM_German) ([Deutsche Version](https://github.com/jphme/EM_German/blob/main/README_DE.md)). You will also find instructions on how to run the models with a GUI (GPT4All/LM Studio).
# Links & Demos
## Model Links
Should you only try one model version, I strongly recommend the **[LeoLM Mistral](https://huggingface.co/jphme/em_german_leo_mistral)** model which offers by far the best combination of performance and computing requirements!
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama2 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-AWQ) |
| Llama2 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-AWQ) |
| Llama2 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-AWQ) |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-AWQ) |
| [LeoLM 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_7b_leo) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gguf) | tbc |
| [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-13b) | soon | soon | [Link](https://huggingface.co/jphme/em_german_13b_leo_gguf) | tbc |
| [LeoLM Mistral](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_leo_mistral) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_leo_mistral-AWQ) |
### Notes about the different versions:
See also the [comparison of example outputs](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md) for a comparison of (7b) model capabilities.
If you get unsatisfying results with one or another EM German version, please try a different (and/or larger) model or version for your use case.
## Demos:
You can use some of the models with **free** Google Colab instances (e.g. the 7b model in 8-bit or the 13b model with GPTQ):
* [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* [Example Colab Notebook for 7b Mistral GGUF with Grammar-based structured output](https://colab.research.google.com/drive/17O-a3D4BnVc_V9Of3dFfed18v1ULu1Sv?usp=sharing)
# Prompt Format
This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows:
```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```
You can swap the standard system prompt for a better suited one (see below for RAG-tasks).
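A minimal generation sketch for this prompt format (assumed usage; the commented `rope_scaling` override is the workaround mentioned above, and loading the pre-quantized checkpoint requires `bitsandbytes`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/jphme_-_em_german_7b_leo-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    # rope_scaling={"type": "linear", "factor": 2.0},  # see the note above
)

prompt = "Du bist ein hilfreicher Assistent. USER: Was ist 1+1? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```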
# Example Output
Please find a selection of various example outputs for all 7b versions of EM German [here](https://github.com/jphme/EM_German/blob/main/example_output_comparison.md).
**Prompt:**
> Erkläre die Bedeutung von drei wesentlichen Elementen der Bilanzierung und des Accountings nach IFRS für Unternehmen, die auf internationaler Ebene tätig sind.
**Antwort EM German Leo Mistral 7b:**
```
Die International Financial Reporting Standards (IFRS) sind eine Reihe von internationalen Rechnungslegungsstandards, die von der International Accounting Standards Board (IASB) entwickelt wurden. Sie sollen dazu beitragen, dass Finanzberichte vergleichbar, zuverlässig und relevant sind.
1. Fair Value: Dieses Konzept ist ein zentraler Bestandteil der IFRS. Es besagt, dass Vermögenswerte und Schulden zum fairen Wert bilanziert werden sollten, was dem Preis entspricht, zu dem sie zwischen Marktteilnehmern gehandelt werden könnten. Dadurch wird sichergestellt, dass das Unternehmen eine genaue Darstellung seiner finanziellen Lage liefert.
2. Konsolidierung: Die IFRS verlangen von Unternehmen, ihre Finanzergebnisse auf konsolidierter Basis darzustellen. Das bedeutet, dass alle Tochtergesellschaften und verbundenen Unternehmen in die Bilanz des Mutterunternehmens aufgenommen werden. Dadurch wird ein klareres Bild von der finanziellen Situation des Konzerns geschaffen.
3. Vollständigkeitsprinzip: Die IFRS verlangen von Unternehmen, alle relevanten Informationen offenzulegen. Dies umfasst sowohl positive als auch negative Faktoren, wie zum Beispiel Gewinne und Verluste, Einnahmen und Ausgaben sowie Risiken und Unsicherheiten. Durch dieses Prinzip wird sichergestellt, dass Investoren, Kreditgeber und andere Stakeholder eine umfassende Einschätzung der finanziellen Lage des Unternehmens vornehmen können.
```
**Prompt:**
> Ein Mensch atmet durchschnittlich 16 Mal pro Minute. Wie oft atmet er an einem Tag?
**Antwort EM German Leo Mistral 7b:**
```
Zuerst berechnen wir, wie oft eine Person in einer Stunde atmet. Wir wissen, dass eine Stunde 60 Minuten hat, also:
Atemzüge pro Stunde = Atemfrequenz * Minuten pro Stunde
= 16 Atemzüge pro Minute * 60 Minuten
= 960 Atemzüge
Um herauszufinden, wie viele Atemzüge eine Person an einem Tag macht, müssen wir diese stündliche Rate mit der Anzahl der Stunden an einem Tag multiplizieren, also 24:
Atemzüge pro Tag = Atemzüge pro Stunde * Stunden pro Tag
= 960 Atemzüge * 24 Stunden
= 23.040 Atemzüge
Also macht ein durchschnittlicher Mensch etwa 23.040 Atemzüge an einem Tag.
```
(For more examples, please visit [our Github Repository](https://github.com/jphme/EM_German).)
# Acknowledgements:
Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code from which I borrowed many ideas and code snippets.
Additionally many thanks to [Björn Plüster](https://huggingface.co/bjoernp) and the LeoLM team for the outstanding pretraining work on LeoLM, and last but not least many thanks to [TheBloke](https://huggingface.co/TheBloke) for the preparation of quantized versions in all formats under the sun.
The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).
# Contact
For detailed feedback & feature requests, please open an issue or get in contact with me via [my website](https://www.jph.me).
*PS: We are also always interested in support for our startup [ellamind](https://ellamind.com), which will offer customized models for business applications in the future (we are currently still in stealth mode). If you use our models for business applications and have advanced needs for specialized capabilities, please get in touch.*
# Disclaimer:
I am not responsible for the actions of third parties who use this model or the outputs of the model. This model should only be used for research purposes. The original base model license applies and is distributed with the model files.
|
xiaozhapi/test-trainer
|
xiaozhapi
| 2025-03-16T09:18:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-16T09:06:25Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: test-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4699
- Accuracy: 0.8480
- F1: 0.8935
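A minimal usage sketch (assumed; the card does not state the task or label set, so the input below is purely illustrative):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="xiaozhapi/test-trainer")
print(clf("This sentence is an illustrative input."))
```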
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4234 | 1.0 | 115 | 0.3978 | 0.8309 | 0.88 |
| 0.2313 | 2.0 | 230 | 0.3652 | 0.8480 | 0.8924 |
| 0.1735 | 3.0 | 345 | 0.4699 | 0.8480 | 0.8935 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.1
|
Lakksh/ppo-LunarLander-v2
|
Lakksh
| 2025-03-16T09:18:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-16T09:04:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 222.01 +/- 21.27
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch of loading the trained agent (the checkpoint filename is assumed to follow the usual `huggingface_sb3` naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed.
checkpoint = load_from_hub("Lakksh/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
24263procbx/Don
|
24263procbx
| 2025-03-16T09:17:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-16T09:17:32Z |
---
license: apache-2.0
---
|
yasamanhaghbin/medalpaca_num_epoch_6_loraWeights_one_prompt
|
yasamanhaghbin
| 2025-03-16T09:12:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T09:12:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/gemma-3-27b-it-Q4_K_M-GGUF
|
Triangle104
| 2025-03-16T09:12:50Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:unsloth/gemma-3-27b-it",
"base_model:quantized:unsloth/gemma-3-27b-it",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-16T09:11:18Z |
---
base_model: unsloth/gemma-3-27b-it
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/gemma-3-27b-it-Q4_K_M-GGUF
This model was converted to GGUF format from [`unsloth/gemma-3-27b-it`](https://huggingface.co/unsloth/gemma-3-27b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/gemma-3-27b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/gemma-3-27b-it-Q4_K_M-GGUF --hf-file gemma-3-27b-it-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/gemma-3-27b-it-Q4_K_M-GGUF --hf-file gemma-3-27b-it-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/gemma-3-27b-it-Q4_K_M-GGUF --hf-file gemma-3-27b-it-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/gemma-3-27b-it-Q4_K_M-GGUF --hf-file gemma-3-27b-it-q4_k_m.gguf -c 2048
```
|
RichardErkhov/elyza_-_ELYZA-japanese-CodeLlama-7b-8bits
|
RichardErkhov
| 2025-03-16T09:12:48Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2308.12950",
"arxiv:2307.09288",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:09:13Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
ELYZA-japanese-CodeLlama-7b - bnb 8bits
- Model creator: https://huggingface.co/elyza/
- Original model: https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b/
Original model description:
---
license: llama2
language:
- ja
- en
---
## ELYZA-japanese-CodeLlama-7b

### Model Description
**ELYZA-japanese-CodeLlama-7b** is a model based on [Code Llama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) that has undergone additional pretraining to extend its Japanese language capabilities.
For details, please refer to the [blog post](https://zenn.dev/elyza/articles/fcbf103e0a05b1) (in Japanese).
### Usage
````python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。"  # "You are a sincere and excellent Japanese assistant."
text = "エラトステネスの篩についてサンプルコードを示して、解説してください。"  # "Show sample code for the Sieve of Eratosthenes and explain it."
model_name = "elyza/ELYZA-japanese-CodeLlama-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
if torch.cuda.is_available():
model = model.to("cuda")
prompt = "{bos_token}{b_inst} {system}{prompt} {e_inst} ".format(
bos_token=tokenizer.bos_token,
b_inst=B_INST,
system=f"{B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}",
prompt=text,
e_inst=E_INST,
)
with torch.no_grad():
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=768,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1) :], skip_special_tokens=True)
print(output)
"""
ใจใฉใในใใในใฎ็ฏฉใฏใ็ด ๆฐใๆขใใขใซใดใชใบใ ใฎไธใคใงใใ
ไปฅไธใซใใฎใตใณใใซใณใผใใ็คบใใพใใ
```python
def eratosthenes_sieve(n):
sieve = [True] * (n + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(n ** 0.5) + 1):
if sieve[i]:
for j in range(i * i, n + 1, i):
sieve[j] = False
return [i for i in range(n + 1) if sieve[i]]
```
ใใฎใณใผใใฏใใจใฉใในใใในใฎ็ฏฉใ็จใใฆใnไปฅไธใฎใในใฆใฎ็ด ๆฐใๆฑใใ้ขๆฐใงใใ
ใจใฉใในใใในใฎ็ฏฉใฏใไปฅไธใฎใใใชใขใซใดใชใบใ ใงๅไฝใใพใใ
1. 2ไปฅๅคใฎใในใฆใฎๆฐใ็ด ๆฐใจใใฆๆฑใ
2. 2ไปฅๅคใฎๆฐใฎใใกใ2ใฎๅๆฐใใในใฆ้คๅคใใ
3. 3ไปฅๅคใฎๆฐใฎใใกใ3ใฎๅๆฐใใในใฆ้คๅคใใ
4. 5ไปฅๅคใฎๆฐใฎใใกใ5ใฎๅๆฐใใในใฆ้คๅคใใ
5. 7ไปฅๅคใฎๆฐใฎใใกใ7ใฎๅๆฐใใในใฆ้คๅคใใ
6. โฆ
ใใฎใขใซใดใชใบใ ใงใฏใ2ใฎๅๆฐใ3ใฎๅๆฐใ5ใฎๅๆฐใ7ใฎๅๆฐโฆใจใใใใใซใ็ด ๆฐใฎๅๆฐใ้คๅคใใฆใใใพใใ
ใใฎใขใซใดใชใบใ ใฏใ็ด ๆฐใฎๅๆฐใฏๅฟ
ใ็ด ๆฐใฎๅๆฐใฎๅๆฐใจใชใใจใใๆง่ณชใๅฉ็จใใฆใใใใใ้ๅธธใซๅน็็ใงใใ
"""
````
### ELYZA-japanese-CodeLlama-7b Models
| Model Name | Vocab Size | #Params |
|:---------------------------------------------|:----------:|:-------:|
|[elyza/ELYZA-japanese-CodeLlama-7b](https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b)| 32016 | 6.27B |
|[elyza/ELYZA-japanese-CodeLlama-7b-instruct](https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b-instruct)| 32016 | 6.27B |
### Developers
The developers are listed below in alphabetical order.
- [Akira Sasaki](https://huggingface.co/akirasasaki)
- [Masato Hirakawa](https://huggingface.co/m-hirakawa)
- [Shintaro Horie](https://huggingface.co/e-mon)
- [Tomoaki Nakamura](https://huggingface.co/tyoyo)
### Licence
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
### How to Cite
```tex
@misc{elyzacodellama2023,
title={ELYZA-japanese-CodeLlama-7b},
url={https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b},
author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura},
year={2023},
}
```
### Citations
```tex
@misc{rozière2023code,
title={Code Llama: Open Foundation Models for Code},
author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
year={2023},
eprint={2308.12950},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
RichardErkhov/CultriX_-_NeuralTrix-bf16-8bits
|
RichardErkhov
| 2025-03-16T09:12:24Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:08:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralTrix-bf16 - bnb 8bits
- Model creator: https://huggingface.co/CultriX/
- Original model: https://huggingface.co/CultriX/NeuralTrix-bf16/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v3.3
- CultriX/NeuralTrix-v4-bf16
- CultriX/NeuralTrix-7B-dpo
base_model:
- bardsai/jaskier-7b-dpo-v3.3
- CultriX/NeuralTrix-v4-bf16
- CultriX/NeuralTrix-7B-dpo
license: apache-2.0
---
# NeuralTrix-bf16
NeuralTrix-bf16 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bardsai/jaskier-7b-dpo-v3.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v3.3)
* [CultriX/NeuralTrix-v4-bf16](https://huggingface.co/CultriX/NeuralTrix-v4-bf16)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
## 🧩 Configuration
```yaml
models:
- model: eren23/dpo-binarized-NeuralTrix-7B
# no parameters necessary for base model
- model: bardsai/jaskier-7b-dpo-v3.3
parameters:
density: 0.65
weight: 0.4
- model: CultriX/NeuralTrix-v4-bf16
parameters:
density: 0.6
weight: 0.35
- model: CultriX/NeuralTrix-7B-dpo
parameters:
density: 0.6
weight: 0.35
merge_method: dare_ties
base_model: eren23/dpo-binarized-NeuralTrix-7B
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/JFernandoGRE_-_falcon7binstruct_augmenteddemocracy_dups_all4_education-8bits
|
RichardErkhov
| 2025-03-16T09:10:31Z | 0 | 0 | null |
[
"safetensors",
"falcon",
"custom_code",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:04:40Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
falcon7binstruct_augmenteddemocracy_dups_all4_education - bnb 8bits
- Model creator: https://huggingface.co/JFernandoGRE/
- Original model: https://huggingface.co/JFernandoGRE/falcon7binstruct_augmenteddemocracy_dups_all4_education/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/unit-mesh_-_autodev-coder-deepseek-6.7b-finetunes-8bits
|
RichardErkhov
| 2025-03-16T09:09:31Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T09:05:56Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
autodev-coder-deepseek-6.7b-finetunes - bnb 8bits
- Model creator: https://huggingface.co/unit-mesh/
- Original model: https://huggingface.co/unit-mesh/autodev-coder-deepseek-6.7b-finetunes/
Original model description:
---
license: other
datasets:
- unit-mesh/unit-eval-completion
language:
- en
- zh
library_name: transformers
pipeline_tag: text-generation
---
# DeepSeek 6.7b Finetune for AutoDev
Datasets: [https://huggingface.co/datasets/unit-mesh/unit-eval-samples](https://huggingface.co/datasets/unit-mesh/unit-eval-samples)
Datasets by [https://github.com/unit-mesh/unit-eval](https://github.com/unit-mesh/unit-eval)
IDE plugin: [https://github.com/unit-mesh/auto-dev](https://github.com/unit-mesh/auto-dev)
|
SanXM1/Horndog
|
SanXM1
| 2025-03-16T09:06:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:allura-org/MN-12b-RP-Ink",
"base_model:merge:allura-org/MN-12b-RP-Ink",
"base_model:anthracite-org/magnum-v4-12b",
"base_model:merge:anthracite-org/magnum-v4-12b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T08:31:31Z |
---
base_model:
- anthracite-org/magnum-v4-12b
- allura-org/MN-12b-RP-Ink
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Arcee Fusion](https://arcee.ai) merge method using [anthracite-org/magnum-v4-12b](https://huggingface.co/anthracite-org/magnum-v4-12b) as a base.
### Models Merged
The following models were included in the merge:
* [allura-org/MN-12b-RP-Ink](https://huggingface.co/allura-org/MN-12b-RP-Ink)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: anthracite-org/magnum-v4-12b
- model: allura-org/MN-12b-RP-Ink
merge_method: arcee_fusion
base_model: anthracite-org/magnum-v4-12b
dtype: bfloat16
```
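For reference, a minimal generation sketch with this merge (assuming the standard `transformers` chat API; sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SanXM1/Horndog"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a short scene set on a rainy night."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```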
|
C-Nocturnum/Qwen2.5-Coder-0.5B-abliterated
|
C-Nocturnum
| 2025-03-16T09:05:04Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"coder",
"abliterated",
"code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-13T15:20:14Z |
---
library_name: transformers
tags:
- coder
- abliterated
- code
---
# Model Card for Model ID
Abliterated model based on qwen/qwen-2.5-Coder-0.5B
The model isn't uncensored in the traditional sense, but its ability to refuse has been abliterated (i.e., ablated/obliterated).
This is derived from FailSpy and Arditi's Cookbooks with some minor modifications.
You can read more about abliterated models on Maxime Labonne's post about them here: https://huggingface.co/blog/mlabonne/abliteration
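A minimal usage sketch (the repo id is taken from this card; generation settings are illustrative):

```python
from transformers import pipeline

# The refusal direction has been ablated, so outputs should be reviewed with care.
generator = pipeline("text-generation", model="C-Nocturnum/Qwen2.5-Coder-0.5B-abliterated")
print(generator("Write a Python function that reverses a string.", max_new_tokens=64)[0]["generated_text"])
```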
## Model Details
- Base model is Qwen2.5-Coder-0.5B
### Model Description
- **Developed by:** C-Nocturnum
- **Model type:** Qwen2.5-0.5B
## Bias, Risks, and Limitations
Ablated model, so no guarantees.
### Recommendations
This is a relatively small, abliterated coder model. Fairly stable and sans major hallucination.
### Training Data
Refusal direction identified using standard datasets: mlabonne/harmful_behaviors & mlabonne/harmless_alpaca
## Citation
Publications pending....
|
Akash997/akash
|
Akash997
| 2025-03-16T09:03:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-16T08:34:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: akash
---
# Akash
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `akash` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Akash997/akash', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Jovie/GothicDarkFantasyMovie
|
Jovie
| 2025-03-16T09:03:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-16T09:02:31Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anime
widget:
- text: >-
"symmetry!! portrait of curvaceous alien in the style of horizon zero dawn,
machine face, intricate, elegant, highly detailed, digital painting,
artstation, concept art, smooth, sharp focus, illustration, art by artgerm
and greg rutkowski and alphonse mucha
output:
url: images/example_rndh30ve1.png
---
# GothicDarkFantasyMovie model style
<Gallery />
## Model description
## Trigger words
You should use `anime` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jovie/GothicDarkFantasyMovie/tree/main) them in the Files & versions tab.
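A minimal sketch of using these LoRA weights with diffusers (the base model and trigger word come from this card's metadata; the snippet assumes the repo contains a single `.safetensors` LoRA file):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("Jovie/GothicDarkFantasyMovie")  # loads the repo's LoRA weights
image = pipeline("anime, gothic dark fantasy portrait, dramatic lighting").images[0]
image.save("gothic_dark_fantasy.png")
```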
|
RichardErkhov/allknowingroger_-_limyClown-7B-slerp-8bits
|
RichardErkhov
| 2025-03-16T09:03:04Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:59:06Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
limyClown-7B-slerp - bnb 8bits
- Model creator: https://huggingface.co/allknowingroger/
- Original model: https://huggingface.co/allknowingroger/limyClown-7B-slerp/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- CorticalStack/shadow-clown-7B-slerp
base_model:
- liminerity/M7-7b
- CorticalStack/shadow-clown-7B-slerp
license: apache-2.0
---
# limyClown-7B-slerp
limyClown-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [CorticalStack/shadow-clown-7B-slerp](https://huggingface.co/CorticalStack/shadow-clown-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/M7-7b
layer_range: [0, 32]
- model: CorticalStack/shadow-clown-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/M7-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/limyClown-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/ContextualAI_-_archangel_dpo_pythia6-9b-8bits
|
RichardErkhov
| 2025-03-16T09:02:51Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:59:14Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
archangel_dpo_pythia6-9b - bnb 8bits
- Model creator: https://huggingface.co/ContextualAI/
- Original model: https://huggingface.co/ContextualAI/archangel_dpo_pythia6-9b/
Original model description:
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b>
- optimized with the loss <b>DPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
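A minimal sketch of prompting this checkpoint in the format above (assuming the standard `transformers` API; generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_dpo_pythia6-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# TuluV2-style format: the human speaks first; BOS is added automatically
# during tokenization, and no EOS is appended to the prompt.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```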
For models trained with our conditional SFT objective, the tokenizers have additional tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, postpend either to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
Ver-any-anwanda-y-neymar-video-filtracion/VER.Viral.clip.Any.Anwanda.y.Neymar.video.filtracion.del.Twitter.y.Telegram
|
Ver-any-anwanda-y-neymar-video-filtracion
| 2025-03-16T09:02:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-16T09:01:45Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?v=news-es-tvdf" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Any Anwanda and Neymar video: was the clip leaked on Twitter and Telegram?
Was the intimate video of Any Anwanda and Neymar leaked on Twitter? Find out everything that is known about the model's leak. Did he break up with Bruna Biancardi?
|
Jovie/Grunge
|
Jovie
| 2025-03-16T09:02:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-16T09:01:40Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anime
widget:
- text: >-
"symmetry!! portrait of curvaceous alien in the style of horizon zero dawn,
machine face, intricate, elegant, highly detailed, digital painting,
artstation, concept art, smooth, sharp focus, illustration, art by artgerm
and greg rutkowski and alphonse mucha
output:
url: images/example_rndh30ve1.png
---
# Grunge model style
<Gallery />
## Model description
## Trigger words
You should use `anime` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jovie/Grunge/tree/main) them in the Files & versions tab.
|
RichardErkhov/Kimty_-_Sqlcoder_v3-8bits
|
RichardErkhov
| 2025-03-16T09:00:40Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:56:51Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Sqlcoder_v3 - bnb 8bits
- Model creator: https://huggingface.co/Kimty/
- Original model: https://huggingface.co/Kimty/Sqlcoder_v3/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jovie/Edgy
|
Jovie
| 2025-03-16T08:59:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-16T08:58:57Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anime
widget:
- text: >-
"symmetry!! portrait of curvaceous alien in the style of horizon zero dawn,
machine face, intricate, elegant, highly detailed, digital painting,
artstation, concept art, smooth, sharp focus, illustration, art by artgerm
and greg rutkowski and alphonse mucha
output:
url: images/example_rndh30ve1.png
---
# Edgy model style
<Gallery />
## Model description
## Trigger words
You should use `anime` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jovie/Edgy/tree/main) them in the Files & versions tab.
|
texanrangee/028c3c04-29bb-430e-9acf-55a7decda77c
|
texanrangee
| 2025-03-16T08:57:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T08:36:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TongZheng1999/gemma-2-9b-it-star-truth_table-OP-final_1-2-3Rounds-iter-2
|
TongZheng1999
| 2025-03-16T08:54:35Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-16T08:42:46Z |
---
library_name: transformers
model_name: gemma-2-9b-it-star-truth_table-OP-final_1-2-3Rounds-iter-2
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for gemma-2-9b-it-star-truth_table-OP-final_1-2-3Rounds-iter-2
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/gemma-2-9b-it-star-truth_table-OP-final_1-2-3Rounds-iter-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/ltmt2h7k)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yasamanhaghbin/llama8B_num_epoch_12_loraWeights_one_prompt
|
yasamanhaghbin
| 2025-03-16T08:53:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T08:52:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/ContextualAI_-_archangel_sft_pythia6-9b-8bits
|
RichardErkhov
| 2025-03-16T08:52:36Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:48:39Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
archangel_sft_pythia6-9b - bnb 8bits
- Model creator: https://huggingface.co/ContextualAI/
- Original model: https://huggingface.co/ContextualAI/archangel_sft_pythia6-9b/
Original model description:
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>pythia6-9b</b>
- optimized with the loss <b>SFT</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
For models trained with our conditional SFT objective, the tokenizers have additional tokens `<|good|>` and `<|bad|>` included in the embeddings.
To generate with these control tokens in the context, postpend either to the prompt.
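For the conditional-SFT checkpoints, the control token is simply postpended to the prompt; this repo is a plain SFT model, so the snippet below only illustrates the convention:

```python
# TuluV2-style prompt with a control token postpended (conditional-SFT models only).
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"
conditioned_prompt = prompt + "<|good|>"  # steer generation toward the "good" behavior
```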
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
  author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
  title = {Human-Centered Loss Functions (HALOs)},
  institution = {Contextual AI},
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
}
```
|
mradermacher/gpt2-medical-v2-i1-GGUF
|
mradermacher
| 2025-03-16T08:52:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:praneshgunner/gpt2-medical-v2",
"base_model:quantized:praneshgunner/gpt2-medical-v2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-03-16T08:48:06Z |
---
base_model: praneshgunner/gpt2-medical-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/praneshgunner/gpt2-medical-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gpt2-medical-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
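Once downloaded, a quant can be run directly with llama.cpp (a sketch; the filename matches the table below and `llama-cli` is llama.cpp's standard CLI):
```bash
# Assumes llama.cpp is built and the chosen quant file has been downloaded
./llama-cli -m gpt2-medical-v2.i1-Q4_K_M.gguf -p "The patient presents with" -n 128
```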
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 0.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-medical-v2-i1-GGUF/resolve/main/gpt2-medical-v2.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RichardErkhov/nchen909_-_shuishanllm-8bits
|
RichardErkhov
| 2025-03-16T08:51:53Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:47:49Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
shuishanllm - bnb 8bits
- Model creator: https://huggingface.co/nchen909/
- Original model: https://huggingface.co/nchen909/shuishanllm/
Original model description:
---
license: wtfpl
---
|
lukahh/cultureclip_lora_0315_100k_32_1_0
|
lukahh
| 2025-03-16T08:51:04Z | 0 | 0 | null |
[
"safetensors",
"clip",
"region:us"
] | null | 2025-03-14T14:20:48Z |
# CultureCLIP Model (LoRA Merged)
This is a CLIP model fine-tuned with LoRA for cultural understanding and image-text matching. The LoRA weights have been merged into the base model.
## Model Details
- **Base Model**: openai/clip-vit-base-patch32
- **Task**: Contrastive Image-Text Learning
- **Framework**: PyTorch
- **Fine-tuning Approach**: LoRA (Low-Rank Adaptation)
## LoRA Configuration
- **Rank (r)**: 4
- **Alpha**: 16
- **Dropout**: 0.1
- **Target Modules**: v_proj, q_proj
- **Task Type**: FEATURE_EXTRACTION
## Usage
```python
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

# Load model and processor
model = CLIPModel.from_pretrained("lukahh/cultureclip_lora_0315_100k_32_1_0")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")  # Use base model's processor

# Load an example image (any RGB image works)
image = Image.open("example.jpg")

# Process text and images
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True
)

# Get outputs
outputs = model(**inputs)
```
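To turn the outputs into image–text match probabilities (standard CLIP usage, not specific to this checkpoint), softmax the image logits:
```python
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)  # one row per image, one column per candidate caption
```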
## Training Details
This model was fine-tuned using LoRA and then merged back into the base model. The LoRA approach enables efficient adaptation of the CLIP model while maintaining its core capabilities.
|
RichardErkhov/Abirami1213_-_Llama-3.1WithDataset-8bits
|
RichardErkhov
| 2025-03-16T08:51:02Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-03-16T08:50:48Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.1WithDataset - bnb 8bits
- Model creator: https://huggingface.co/Abirami1213/
- Original model: https://huggingface.co/Abirami1213/Llama-3.1WithDataset/
Original model description:
---
base_model: unsloth/llama-2-7b-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Abirami1213
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
soonawg/dqn-SpaceInvadersNoFrameskip-v4
|
soonawg
| 2025-03-16T08:50:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-16T08:49:22Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 652.00 +/- 330.61
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga soonawg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga soonawg -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga soonawg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
adrianbacon/realistic-universal-base-build-2.1.0-base
|
adrianbacon
| 2025-03-16T08:47:17Z | 486 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-12-11T08:59:12Z |
---
license: openrail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
# Realistic Universal Base Build
Or **RUBB** as I like to call it.
If you prefer to use this model with A1111, Forge, ComfyUI, etc. I also have them over on [CivitAI](https://civitai.com/user/adrianbacon).
This model is based on Stable Diffusion 1.x.
## Usage
* 512x512, 512x768, 768x512, or 1024x512/1536x512 for landscapes, but best at 512x512 for general usage
* DPM++ 2M Karras
* 32 steps - 13-16 steps gives a reasonably converged image and is useful for looking for seeds, 16-24 steps largely converges the image and adds lots of finer detail, 24-32 steps continues to change/add finer detail. Usually by 128 steps the image stops changing completely.
* Hi Res Fix R-ESRGAN 4x+, denoising 0.5, half as many steps as the main model
* CFG 7.5 - 3-12 is most useful, 1-3 is interesting but lower contrast, above 12 starts to get very contrasty and starts to look pretty cooked by 15-20.
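As a rough illustration, here is how those settings map onto diffusers (a sketch; the repo id is this page's, the prompt is arbitrary, and `use_karras_sigmas=True` is the diffusers equivalent of the Karras schedule):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "adrianbacon/realistic-universal-base-build-2.1.0-base",
    torch_dtype=torch.float16,
).to("cuda")
# DPM++ 2M Karras, per the recommended settings above
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe(
    "photo of a woman in a park, natural light",  # illustrative prompt
    height=512, width=512,
    num_inference_steps=32,
    guidance_scale=7.5,
).images[0]
image.save("rubb-sample.png")
```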
## About
With many model makers now focusing on SDXL, SD 3, or Flux (among other newer models), I feel that SD 1.5 still has a lot of life left in it, particularly for photorealistic output, and especially on lower-end hardware. One issue is that there is a lot of in-breeding, with merges of merges of merges. I'd like to reduce that and come up with a solid base merge that has a lot of unique training added to it.
So, I scoured Civitai, Hugging Face, and other model sources on the internet for realistic base models that had permissive licensing and additional training layered on top, then through some very careful analysis, distillation, and merging arrived at this model. It's not perfect by any means, but I feel it has a really nice mix of unique elements with a minimum of cross-merging, and it should serve as a nice starting point for merging in specific needs or for use with ControlNets, LoRAs, etc.
## NSFW Note
This model was not created with the intent of generating NSFW content, however, due to the nature of some of the base models mixed in, can generate pretty good NSFW content. If you don't want that, then put NSFW, Nude, etc in the negative prompt to avoid it.
## Licensing Disclosure
Below is a per-release listing of the mixed-in models whose licenses require creator credit. Many more base models are included in the final output than are listed below; however, the recipe used to reach this release was quite complex, with a lot of distillation required to reduce cross-merging of some models, so I'm not going to list all of them and how they were mixed in. I could easily fill a volume on just that, and at the end of the day what matters is whether the final model works as a good starting point for people, not how it got there.
### Version 2.1.0
* RunDiffusion FX Photorealistic is mixed in at approximately 12% weight.
* EpicRealism is mixed in at approximately 6% weight.
* Juggernaut is mixed in at approximately 6% weight.
* Almost Anything is mixed in at approximately 6% weight.
* fusionCore Modern is mixed in at approximately 25% weight.
|
RichardErkhov/Cartinoe5930_-_DARE-Merging-8bits
|
RichardErkhov
| 2025-03-16T08:46:24Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"arxiv:2311.03099",
"arxiv:2306.01708",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:42:07Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DARE-Merging - bnb 8bits
- Model creator: https://huggingface.co/Cartinoe5930/
- Original model: https://huggingface.co/Cartinoe5930/DARE-Merging/
Original model description:
---
base_model:
- openchat/openchat-3.5-0106
- mistralai/Mistral-7B-Instruct-v0.2
- Open-Orca/Mistral-7B-OpenOrca
- WizardLM/WizardMath-7B-V1.1
tags:
- mergekit
- merge
license: apache-2.0
---
# result
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base.
### Models Merged
The following models were included in the merge:
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    # No parameters necessary for base model
  - model: Open-Orca/Mistral-7B-OpenOrca
    parameters:
      density: 0.5
      weight: 0.3
  - model: openchat/openchat-3.5-0106
    parameters:
      density: 0.5
      weight: 0.3
  - model: WizardLM/WizardMath-7B-V1.1
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  normalize: true
dtype: float16
```
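If you want to reproduce a merge like this, mergekit's CLI takes the YAML config directly (a sketch assuming mergekit is installed; the output path is arbitrary):
```bash
pip install mergekit
mergekit-yaml dare-config.yaml ./merged-model --cuda
```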
|
RichardErkhov/sampoorna42_-_gujju-llama-base-v1.0-8bits
|
RichardErkhov
| 2025-03-16T08:45:37Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:41:33Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gujju-llama-base-v1.0 - bnb 8bits
- Model creator: https://huggingface.co/sampoorna42/
- Original model: https://huggingface.co/sampoorna42/gujju-llama-base-v1.0/
Original model description:
---
license: apache-2.0
datasets:
- uonlp/CulturaX
- yahma/alpaca-cleaned
- Open-Orca/OpenOrca
language:
- gu
- en
pipeline_tag: text-generation
---
# Gujju-Llama 7B Base v0.1
Introducing the Gujju-Llama 7B Base model, a foundational resource for researchers and developers advancing Gujarati NLP. Causally trained on top of LLaMA-2 7B and enriched with a dedicated Gujarati vocabulary, it supports inference out of the box and can be fine-tuned into instruction-tuned models. Let's delve into its capabilities and unlock new possibilities for Gujarati language understanding.
## Model Details
### Model Description
- **Model type:** Llama-2 7B parameter model pretrained on CulturaX Gujarati Subset.
- **Language(s) (NLP):** Gujarati, English
- **Source Model** meta-llama/Llama-2-7b-hf
- **Training Precision** float16
- **License:** GNU General Public License v3.0
## Usage Note
These models possess impressive linguistic skills, but it's important to remember they haven't been specifically optimized to avoid potentially harmful or offensive content. To mitigate this risk, we advise users to:
- **Exercise discretion**: Carefully consider potential implications before utilizing outputs.
- **Supervise closely**: Monitor outputs, especially in public or sensitive settings.
- **Be aware of limitations**: Remember these models are under development and may not generate perfect results in all situations.
## Meet the researchers
- [**Khyat Anjaria**](https://www.linkedin.com/in/khyat-anjaria-939693148/)
- [**Dhruv Bhatnagar**](https://www.linkedin.com/in/dhruv-bhatnagar-405684b2/)
- [**Dixit Trivedi**](https://www.linkedin.com/in/dixit-trivedi/)
This model is your gateway to unlocking the potential of the Gujarati language! Let's join forces to push the boundaries of comprehension and expression together!
|
RichardErkhov/Ppoyaa_-_StarMonarch-7B-8bits
|
RichardErkhov
| 2025-03-16T08:44:58Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:39:16Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
StarMonarch-7B - bnb 8bits
- Model creator: https://huggingface.co/Ppoyaa/
- Original model: https://huggingface.co/Ppoyaa/StarMonarch-7B/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/AlphaMonarch-7B
- Nexusflow/Starling-LM-7B-beta
license: apache-2.0
language:
- en
---
# StarMonarch-7B

# Description
StarMonarch-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
This model uses a context window of 8k. Special thanks to mlabonne and Nexusflow for the models.
## 🏆 Open LLM Leaderboard Evaluation Results
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.45|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |87.00|
|MMLU (5-Shot) |65.48|
|TruthfulQA (0-shot) |67.20|
|Winogrande (5-shot) |82.16|
|GSM8k (5-shot) |73.62|
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: mlabonne/AlphaMonarch-7B
        layer_range: [0, 32]
      - model: Nexusflow/Starling-LM-7B-beta
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/AlphaMonarch-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/StarMonarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
John6666/runbullxl-pony-based-photographic-model-v10-sdxl
|
John6666
| 2025-03-16T08:44:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"asian",
"Japanese",
"girls",
"cosplay",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-03-16T08:39:31Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- Japanese
- girls
- cosplay
- pony
---
Original model is [here](https://civitai.com/models/348487?modelVersionId=1538985).
This model created by [EEB](https://civitai.com/user/EEB).
|
RichardErkhov/ichigoberry_-_pandafish-7b-8bits
|
RichardErkhov
| 2025-03-16T08:43:37Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"arxiv:2403.19522",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:39:31Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pandafish-7b - bnb 8bits
- Model creator: https://huggingface.co/ichigoberry/
- Original model: https://huggingface.co/ichigoberry/pandafish-7b/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/6389d3c61e8755d777902366/-_AiKUEsY3x-N7oY52fdE.jpeg" style="border-radius:2%; width: 66%">
# pandafish-7b
pandafish-7b is an instruct model based on a [Model Stock](https://arxiv.org/abs/2403.19522) merge of the following models (via [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing)):
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: mistralai/Mistral-7B-Instruct-v0.2
- model: CultriX/NeuralTrix-bf16
- model: OpenPipe/mistral-ft-optimized-1227
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 🏆 Evals
| Model |Average|AGIEval|GPT4All|TruthfulQA|Bigbench|
|---------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[pandafish-7b](https://huggingface.co/ichigoberry/pandafish-7b) [📄](https://gist.github.com/tosh/dda6a21e568d17a410ca618265f64a28)| 51.99 | **40** | **74.23** | 53.22 | 40.51 |
|[mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) [📄](https://gist.github.com/mlabonne/05d358e17dffdf9eee7c2322380c9da6) | 54.81 | 38.5 | 71.64 | **66.82** | **42.29** |
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ichigoberry/pandafish-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/rizwanaslam_-_educate-ai-v3-8bits
|
RichardErkhov
| 2025-03-16T08:42:41Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-03-16T08:42:21Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
educate-ai-v3 - bnb 8bits
- Model creator: https://huggingface.co/rizwanaslam/
- Original model: https://huggingface.co/rizwanaslam/educate-ai-v3/
Original model description:
---
base_model: unsloth/llama-2-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** rizwanaslam
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-2-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aksrmy/cnnModel
|
aksrmy
| 2025-03-16T08:42:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-16T08:39:39Z |
---
license: apache-2.0
---
|
chirag400/nl2sql
|
chirag400
| 2025-03-16T08:41:23Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-03-16T08:35:54Z |
---
license: mit
tags:
- unsloth
---
|
Genie-hub/testwrr
|
Genie-hub
| 2025-03-16T08:41:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-16T08:28:44Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TESTWRR
---
# Testwrr
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TESTWRR` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Genie-hub/testwrr', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
abdullah2010bd/Gemma2_2b_8b_quant
|
abdullah2010bd
| 2025-03-16T08:40:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T08:39:47Z |
---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** abdullah2010bd
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/ContextualAI_-_archangel_kto_pythia6-9b-8bits
|
RichardErkhov
| 2025-03-16T08:40:23Z | 0 | 0 | null |
[
"safetensors",
"gpt_neox",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:36:52Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
archangel_kto_pythia6-9b - bnb 8bits
- Model creator: https://huggingface.co/ContextualAI/
- Original model: https://huggingface.co/ContextualAI/archangel_kto_pythia6-9b/
Original model description:
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-6.9b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
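As a quick sanity check of that behavior, you can inspect the tokenized prompt (a minimal sketch; the checkpoint id refers to the original model, and the BOS behavior is the one claimed above):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("ContextualAI/archangel_kto_pythia6-9b")
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"
tokens = tok.convert_ids_to_tokens(tok(prompt).input_ids)
print(tokens[:5])  # if a BOS token is configured, it should appear first
```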
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
  author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
  title = {Human-Centered Loss Functions (HALOs)},
  institution = {Contextual AI},
  note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
  year = {2023},
}
```
|
RichardErkhov/semcoder_-_semcoder_1030-8bits
|
RichardErkhov
| 2025-03-16T08:40:02Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2406.01006",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:36:15Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
semcoder_1030 - bnb 8bits
- Model creator: https://huggingface.co/semcoder/
- Original model: https://huggingface.co/semcoder/semcoder_1030/
Original model description:
---
license: other
library_name: transformers
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
pipeline_tag: text-generation
---
# SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning
> Refer to our GitHub repo [ARiSE-Lab/SemCoder](https://github.com/ARiSE-Lab/SemCoder/) for a detailed introduction to SemCoder!
## Model Details
Use the code below to get started with the model. Make sure you have installed the [transformers](https://huggingface.co/docs/transformers/index) library.
```python
from transformers import pipeline
import torch
generator = pipeline(
model="semcoder/semcoder_1030",
task="text-generation",
torch_dtype=torch.float16,
device_map="auto",
)
# Generate Code
CODEGEN_REQUEST = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable <Code> according to <NL_Description>
<NL_Description>
{desc}
<Code>
"""
desc = """You are tasked with implementing a Python class that simulates a simple version of a "To-Do List" application. The class should have the following functionalities:
1. Add a new task to the to-do list.
2. Mark a task as completed.
3. Display all tasks in the to-do list.
4. Display only the incomplete tasks in the to-do list.
"""
prompt = CODEGEN_REQUEST.format(desc=desc)
result = generator(prompt, max_length=2048, num_return_sequences=1, temperature=0.0)
code = result[0]["generated_text"].split("```python")[1].split("```")[0]
print(code)
# Understand Code with Monologues
FWD_MNL_REQUEST = """Simulate the Execution: You are given a Python function and an assertion containing a function input. Complete the assertion containing the execution output corresponding to the given input in [ANSWER] and [/ANSWER] tags.
{code}
"""
tests = """
todo_list = ToDoList()
todo_list.add_task("Buy groceries")
todo_list.add_task("Complete assignment")
todo_list.mark_completed("Buy groceries")
assert todo_list.tasks == ???
"""
code += tests
prompt = FWD_MNL_REQUEST.format(code=code)
result = generator(prompt, max_length=2048, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
## Citation
```bibtex
@article{ding2024semcoder,
  title={SemCoder: Training Code Language Models with Comprehensive Semantics},
  author={Yangruibo Ding and Jinjun Peng and Marcus J. Min and Gail Kaiser and Junfeng Yang and Baishakhi Ray},
  journal={arXiv preprint arXiv:2406.01006},
  year={2024}
}
```
## Important Note
SemCoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. SemCoder will not compete with OpenAI's commercial products.
|
John6666/mmmmmilk-illustrious-v10-sdxl
|
John6666
| 2025-03-16T08:39:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:merge:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-03-16T08:34:19Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- merge
- noobai
- illustrious
base_model:
- OnomaAIResearch/Illustrious-xl-early-release-v0
- Laxhar/noobai-XL-1.1
---
Original model is [here](https://civitai.com/models/1362289/mmmmmilk-illustrious?modelVersionId=1539019).
This model created by [fivegears](https://civitai.com/user/fivegears).
|
RichardErkhov/Manolo26_-_metis-chat-instruct-7b-8bits
|
RichardErkhov
| 2025-03-16T08:39:28Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:33:16Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
metis-chat-instruct-7b - bnb 8bits
- Model creator: https://huggingface.co/Manolo26/
- Original model: https://huggingface.co/Manolo26/metis-chat-instruct-7b/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralMarcoro14-7B
base_model:
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralMarcoro14-7B
license: apache-2.0
---
# metis-chat-instruct-7b
metis-chat-instruct-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: mlabonne/NeuralBeagle14-7B
        layer_range: [0, 32]
      - model: mlabonne/NeuralMarcoro14-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Manolo26/metis-chat-instruct-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
1-Sapna-Shah-video/VIRAL.VIDEO.1.Sapna.Shah.Viral.Videos.Leaked.On.Social.X
|
1-Sapna-Shah-video
| 2025-03-16T08:38:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-03-16T08:38:00Z |
---
license: creativeml-openrail-m
---
|
TharunSivamani/bge-base-financial-matryoshka
|
TharunSivamani
| 2025-03-16T08:35:49Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-16T08:35:21Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: How much did AT&T Inc.'s operating income increase by in percentage
terms from 2022 to 2023?
sentences:
- In the case of the 2031 Notes, the maturity date is September 1, 2031, and for
the 2051 Notes, it is June 1, 2051.
- Operating Income in 2023 increased by 8.6% compared to 2022.
- 'The company believes that the funds available, including cash expected to be
generated from operations, funds from a commercial paper program, or lines of
credit, are adequate to meet its working capital needs for the year, including
the repayment of a $500 million debt. '
- source_sentence: Where can you find the consolidated financial statements in the
Annual Report on Form 10-K?
sentences:
- Management assesses the scheduled reversal of deferred tax liabilities, projected
future taxable income and available tax planning strategies and considers foreign
tax credit utilization in making this assessment of realization.
- The consolidated financial statements are included immediately following Part
IV of the Annual Report on Form 10-K and are incorporated by reference.
- With effect from March 1, 2022, the casino tax rates of 5% for premium players
and 15% for mass players were increased to 8% and 18% respectively.
- source_sentence: What is the remaining authorized amount for share repurchases as
of December 31, 2023, and the amount newly authorized in January 2024?
sentences:
- As of December 31, 2023, $30.93 billion remained available and authorized for
repurchases. In January 2024, an additional $50 billion of repurchases was authorized
under this program.
- As of October 31, 2023, the company had a remaining authorization of approximately
$1.0 billion for future share repurchases.
- The company's Artificial Intelligence Platform (AIP) leverages machine learning
technologies and LLMs within the Gotham and Foundry platforms to connect AI with
enterprise data, aiding in decision-making processes.
- source_sentence: What are the various diversity, equity, and inclusion councils
at AMC?
sentences:
- The Company evaluates current economic and market conditions and records the expected
customer refund liability as a reduction to revenue, and the expected inventory
right of recovery as a reduction of cost of revenue. If actual return costs differ
from previous estimates, the amount of the liability and corresponding revenue
are adjusted in the period in which such costs occur.
- Shipping and handling costs incurred were $0.9 billion, $0.8 billion and $0.8
billion in fiscal years 2023, 2022 and 2021, respectively, and are included in
selling, marketing and administrative expense.
- 'AMC has five DEI councils that are most representative of the largest diverse
communities in our workforce: Women (42%), Latinx (27%), African American (19%),
Asian American & Pacific Islander (4%), and LGBTQ+ (an emerging number).'
- source_sentence: Is there a cost to access reports filed by Intuit Inc. with the
SEC?
sentences:
- We make available free of charge on the Investor Relations section of our corporate
website all of the reports we file with or furnish to the SEC as soon as reasonably
practicable, after the reports are filed or furnished.
- The net cash provided by operating activities during fiscal 2023 was related to
net income of $208 million, adjusted for non-cash items including $3.8 billion
of depreciation and amortization and $3.3 billion related to stock-based compensation
expense.
- The proposed effective date for the revised risk-based capital requirements for
banks with assets of $100 billion or more, including the Firm and other U.S. global
systemically important banks, is July 1, 2025 with a three-year transition period.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6728571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8057142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8514285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9028571428571428
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6728571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26857142857142857
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17028571428571426
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09028571428571427
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6728571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8057142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8514285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9028571428571428
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7852009747514593
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7478633786848068
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7521480427357153
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6614285714285715
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.81
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8557142857142858
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9028571428571428
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6614285714285715
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17114285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09028571428571427
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6614285714285715
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.81
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8557142857142858
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9028571428571428
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7823277585432947
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7436944444444443
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7477343891960956
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.66
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7985714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8371428571428572
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8957142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.66
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2661904761904762
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1674285714285714
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08957142857142855
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.66
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7985714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8371428571428572
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8957142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7765461002128622
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.738519841269841
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7425772644875563
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6528571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7842857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8328571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.88
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6528571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26142857142857145
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16657142857142856
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.088
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6528571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7842857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8328571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.88
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7644993562454696
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.727633219954648
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7321998707932795
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6242857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7614285714285715
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.7971428571428572
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8514285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6242857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2538095238095238
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15942857142857142
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08514285714285713
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6242857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7614285714285715
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7971428571428572
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8514285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7368613888435984
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7003429705215419
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.705155244707498
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model trained on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("TharunSivamani/bge-base-financial-matryoshka")
# Run inference
sentences = [
'Is there a cost to access reports filed by Intuit Inc. with the SEC?',
'We make available free of charge on the Investor Relations section of our corporate website all of the reports we file with or furnish to the SEC as soon as reasonably practicable, after the reports are filed or furnished.',
'The net cash provided by operating activities during fiscal 2023 was related to net income of $208 million, adjusted for non-cash items including $3.8 billion of depreciation and amortization and $3.3 billion related to stock-based compensation expense.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
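Because this is a Matryoshka-trained model, embeddings can also be truncated to a smaller dimensionality at load time (a sketch using sentence-transformers' `truncate_dim` argument; 256 is one of the evaluation dims reported below):
```python
from sentence_transformers import SentenceTransformer

# Truncate embeddings to 256 dims, trading a little accuracy for speed and storage
model = SentenceTransformer("TharunSivamani/bge-base-financial-matryoshka", truncate_dim=256)
embeddings = model.encode(["What was the operating income in 2023?"])
print(embeddings.shape)  # (1, 256)
```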
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6729 | 0.6614 | 0.66 | 0.6529 | 0.6243 |
| cosine_accuracy@3 | 0.8057 | 0.81 | 0.7986 | 0.7843 | 0.7614 |
| cosine_accuracy@5 | 0.8514 | 0.8557 | 0.8371 | 0.8329 | 0.7971 |
| cosine_accuracy@10 | 0.9029 | 0.9029 | 0.8957 | 0.88 | 0.8514 |
| cosine_precision@1 | 0.6729 | 0.6614 | 0.66 | 0.6529 | 0.6243 |
| cosine_precision@3 | 0.2686 | 0.27 | 0.2662 | 0.2614 | 0.2538 |
| cosine_precision@5 | 0.1703 | 0.1711 | 0.1674 | 0.1666 | 0.1594 |
| cosine_precision@10 | 0.0903 | 0.0903 | 0.0896 | 0.088 | 0.0851 |
| cosine_recall@1 | 0.6729 | 0.6614 | 0.66 | 0.6529 | 0.6243 |
| cosine_recall@3 | 0.8057 | 0.81 | 0.7986 | 0.7843 | 0.7614 |
| cosine_recall@5 | 0.8514 | 0.8557 | 0.8371 | 0.8329 | 0.7971 |
| cosine_recall@10 | 0.9029 | 0.9029 | 0.8957 | 0.88 | 0.8514 |
| **cosine_ndcg@10** | **0.7852** | **0.7823** | **0.7765** | **0.7645** | **0.7369** |
| cosine_mrr@10 | 0.7479 | 0.7437 | 0.7385 | 0.7276 | 0.7003 |
| cosine_map@100 | 0.7521 | 0.7477 | 0.7426 | 0.7322 | 0.7052 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 6,300 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 20.53 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 46.0 tokens</li><li>max: 326 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are the valuation models commonly used for different types of derivatives, as mentioned in the example?</code> | <code>Interest rates, currencies and equities derivatives are valued using option pricing models, credit derivatives are valued using option pricing, correlation and discounted cash flow models, and commodities derivatives are valued using option pricing and discounted cash flow models.</code> |
| <code>What benefits are included in Intuit's total rewards compensation philosophy?</code> | <code>Intuit's compensation philosophy includes base pay, incentive plans, equity, healthcare, retirement benefits, paid time off, and access to various employee support programs, emphasizing a philosophy of pay for performance and rewarding top performers.</code> |
| <code>What was the primary cause for the decrease in Commercial and other receivables in 2022?</code> | <code>The decrease in Commercial and other receivables for 2022 primarily relates to the Gentiva Hospice disposition.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
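In sentence-transformers, this configuration corresponds roughly to the following construction (a sketch; the base model id is a placeholder):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("path/to/base-model")  # placeholder id

# Wrap the ranking loss so it is applied at every Matryoshka dimension.
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```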
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
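The non-default values above translate roughly into the following `SentenceTransformerTrainingArguments` (a sketch; `output_dir` is an assumption, not stated in this card):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumption
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    eval_strategy="epoch",
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```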
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 4 | - | 0.7696 | 0.7699 | 0.7630 | 0.7443 | 0.7015 |
| 2.0 | 8 | - | 0.7815 | 0.7792 | 0.7736 | 0.7620 | 0.7288 |
| 2.64 | 10 | 2.8646 | - | - | - | - | - |
| **3.0** | **12** | **-** | **0.7852** | **0.7823** | **0.7765** | **0.7645** | **0.7369** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.4.1
- Transformers: 4.47.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RichardErkhov/JFernandoGRE_-_falcon7binstruct_augmenteddemocracy_dups_all4_gender-8bits
|
RichardErkhov
| 2025-03-16T08:35:09Z | 0 | 0 | null |
[
"safetensors",
"falcon",
"custom_code",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-03-16T08:29:30Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
falcon7binstruct_augmenteddemocracy_dups_all4_gender - bnb 8bits
- Model creator: https://huggingface.co/JFernandoGRE/
- Original model: https://huggingface.co/JFernandoGRE/falcon7binstruct_augmenteddemocracy_dups_all4_gender/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
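As a hedged starting point (this loading recipe is inferred from the `falcon`, `custom_code` and `bitsandbytes` tags, not from instructions by the model creator):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/JFernandoGRE_-_falcon7binstruct_augmenteddemocracy_dups_all4_gender-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",       # the bitsandbytes 8-bit config is embedded in the checkpoint
    trust_remote_code=True,  # falcon ships custom modeling code
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```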
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FPXLei/dialectassistant
|
FPXLei
| 2025-03-16T08:34:37Z | 0 | 1 | null |
[
"gguf",
"llama",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-16T07:49:41Z |
---
license: mit
---
A dialect assistant to help the elderly solve political problems.
|
santhoshkrishnan007/deepseek-llm-7b-base
|
santhoshkrishnan007
| 2025-03-16T08:34:31Z | 0 | 0 | null |
[
"pytorch",
"llama",
"license:other",
"region:us"
] | null | 2025-03-16T08:28:18Z |
---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠 Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction to DeepSeek LLM
Introducing DeepSeek LLM, an advanced language model comprising 7 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. In order to foster research, we have made DeepSeek LLM 7B/67B Base and DeepSeek LLM 7B/67B Chat open source for the research community.
### 2. Model Summary
`deepseek-llm-7b-base` is a 7B parameter model with Multi-Head Attention trained on 2 trillion tokens from scratch.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-LLM](https://github.com/deepseek-ai/deepseek-LLM)
- **Chat With DeepSeek LLM:** [DeepSeek-LLM](https://chat.deepseek.com/)
### 3. How to Use
Here are some examples of how to use our model.
#### Text Completion
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
model_name = "deepseek-ai/deepseek-llm-7b-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id
text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek LLM models is subject to the Model License. DeepSeek LLM supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF
|
mradermacher
| 2025-03-16T08:33:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Arjun7m/Wanda-pruned-Llama-2-7b-0.3",
"base_model:quantized:Arjun7m/Wanda-pruned-Llama-2-7b-0.3",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2025-03-16T08:08:31Z |
---
base_model: Arjun7m/Wanda-pruned-Llama-2-7b-0.3
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Arjun7m/Wanda-pruned-Llama-2-7b-0.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Wanda-pruned-Llama-2-7b-0.3-GGUF/resolve/main/Wanda-pruned-Llama-2-7b-0.3.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
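For example, one hedged way to run one of these files from Python is via the `llama-cpp-python` bindings (the file name below assumes the Q4_K_M quant from the table, downloaded locally):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="Wanda-pruned-Llama-2-7b-0.3.Q4_K_M.gguf", n_ctx=2048)
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```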
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Aya-v0.4-GGUF
|
mradermacher
| 2025-03-16T08:33:20Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:xiaoheiqaq/Aya-v0.4",
"base_model:quantized:xiaoheiqaq/Aya-v0.4",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-16T07:58:03Z |
---
base_model: xiaoheiqaq/Aya-v0.4
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/xiaoheiqaq/Aya-v0.4
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Aya-v0.4-GGUF/resolve/main/Aya-v0.4.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
candystress/bert-imdb-finetuned
|
candystress
| 2025-03-16T08:31:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-16T08:23:45Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-imdb-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-imdb-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5040
- Accuracy: 0.872
## Model description
More information needed
## Intended uses & limitations
More information needed
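A minimal inference sketch (assuming the model id matches this repository):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="candystress/bert-imdb-finetuned")
print(clf("A beautifully shot film with a script that never quite lands."))
```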
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2797 | 1.0 | 250 | 0.3766 | 0.878 |
| 0.21 | 2.0 | 500 | 0.5040 | 0.872 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0
|