Dataset columns: modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-02 18:27:42) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 549 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-02 18:24:50) | card (markdown string, length 11–1.01M; shown below each row)

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |
|---|---|---|---|---|---|---|---|---|
| Baltish/Mmj | Baltish | 2025-02-26T05:50:15Z | 0 | 0 | null | [license:apache-2.0, region:us] | null | 2025-02-26T05:50:15Z |
---
license: apache-2.0
---

| rowankwang/Llama-3.3-70B-Instruct-Reference-cubic_gravity-7449a50e | rowankwang | 2025-02-26T05:48:04Z | 0 | 0 | peft | [peft, safetensors, arxiv:1910.09700, base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference, base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference, region:us] | null | 2025-02-26T05:43:11Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
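Pending that information, here is a minimal sketch (an assumption-laden illustration, not from the card) that loads the adapter onto the base model listed in the metadata via the standard `peft` workflow:

```python
# Hedged sketch (not from the card): load the adapter onto its referenced base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
adapter_id = "rowankwang/Llama-3.3-70B-Instruct-Reference-cubic_gravity-7449a50e"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
```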
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0

| leixa/e709cd85-00b0-400c-b2ee-06c5b8d99945 | leixa | 2025-02-26T05:47:34Z | 0 | 0 | peft | [peft, safetensors, llama, axolotl, generated_from_trainer, base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060, base_model:adapter:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060, region:us] | null | 2025-02-26T03:45:21Z |
---
library_name: peft
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e709cd85-00b0-400c-b2ee-06c5b8d99945
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5bb0834f57e78dbd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5bb0834f57e78dbd_train_data.json
type:
field_input: question
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp_timeout: 1800
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
group_by_length: true
hub_model_id: leixa/e709cd85-00b0-400c-b2ee-06c5b8d99945
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1800
micro_batch_size: 4
mlflow_experiment_name: /tmp/5bb0834f57e78dbd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
relora_prune_ratio: 0.9
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: acopia-grant
wandb_mode: online
wandb_name: 0917dd10-b149-4f18-9779-8c5c61bcb6b7
wandb_project: Gradients-On-112
wandb_run: your_name
wandb_runid: 0917dd10-b149-4f18-9779-8c5c61bcb6b7
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e709cd85-00b0-400c-b2ee-06c5b8d99945
This model is a fine-tuned version of [The-matt/llama2_ko-7b_distinctive-snowflake-182_1060](https://huggingface.co/The-matt/llama2_ko-7b_distinctive-snowflake-182_1060) on the dataset described in the axolotl config above (`5bb0834f57e78dbd_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 1.0107
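The card gives no usage snippet; as a LoRA adapter, it should load onto the base model with the standard `peft` API. A hedged sketch (assuming the usual adapter layout; untested here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "The-matt/llama2_ko-7b_distinctive-snowflake-182_1060"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, "leixa/e709cd85-00b0-400c-b2ee-06c5b8d99945")

# Illustrative generation; the prompt format used in training is shown in the config above.
inputs = tokenizer("{instruction}".format(instruction="안녕하세요"), return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```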
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 1800
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 1.3257 |
| 1.2304 | 0.0575 | 150 | 1.1362 |
| 1.1244 | 0.1150 | 300 | 1.1020 |
| 1.1327 | 0.1725 | 450 | 1.0850 |
| 0.987 | 0.2300 | 600 | 1.0718 |
| 0.9986 | 0.2874 | 750 | 1.0612 |
| 1.0472 | 0.3449 | 900 | 1.0432 |
| 0.9791 | 0.4024 | 1050 | 1.0411 |
| 1.026 | 0.4599 | 1200 | 1.0306 |
| 0.9771 | 0.5174 | 1350 | 1.0292 |
| 1.002 | 0.5749 | 1500 | 1.0176 |
| 0.9724 | 0.6324 | 1650 | 1.0152 |
| 0.9369 | 0.6899 | 1800 | 1.0107 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1

| Ynmaster/Llama-3.2-1B-CV150 | Ynmaster | 2025-02-26T05:47:13Z | 0 | 0 | null | [pytorch, safetensors, llama, unsloth, trl, sft, license:llama3.2, 8-bit, region:us] | null | 2025-02-26T05:33:09Z |
---
license: llama3.2
tags:
- unsloth
- trl
- sft
---

| DavidBaloches/Acorn_Vikki | DavidBaloches | 2025-02-26T05:46:24Z | 0 | 0 | diffusers | [diffusers, text-to-image, lora, template:diffusion-lora, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us] | text-to-image | 2025-02-26T04:28:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
woman, sitting on the floor in her home, sunlight shining in from a nearby
window, busty, tight t-shirt, plunging neckline, short sleeves, fit midriff,
tight shorts, long hair, brunette hair, leaning on one hand, hand in her
hair, casting shadow style, extremely intricate details, masterpiece, epic,
clear shadows and highlights, realistic, intense, enhanced contrast, highly
detailed skin. <lora:VikkiA:0.6><lora:nipples:-1>
output:
url: images/05068-2035045684.png
- text: >-
shirome, white eyes, a face photograph of a female ghost, 19 years old,
skinny body, elegant face, her charisma is full of grace, volumetric
lighting, dramatic lighting, dark scene, she is a ghost in a dark dungeon.
<lora:VikkiA:0.7><lora:nipples:-1>
output:
url: images/05073-2525776150.png
- text: >-
Hyper realistic full body view of beautiful 18 year old nun, long legs, Long
Black Hair, very big breasts, Wide hips, delicate hands with long slender
fingers, young catholic nun, pretty, cute face, Short skirt, Black straps,
frontal view, ray of light breaking the shadows of a church, 8k wallpaper,
UHD, perfect lighting, masterpiece, dramatic, photorealistic, finely
detailed, shadows, large, Full Body Picture .
<lora:VikkiA:0.7><lora:nipples:-1>
output:
url: images/05070-1796232558.png
- text: >-
fantasy art, woman , young, messy short blonde hair, blue eyes, sitting on
sofa, crossed legs, sexy legs, long legs, seductive, full length long white
knit sweater that goes down to knees, bare shoulders , cleavage, sitting,
white wool thigh highs, slim white body, pale skin, curvy body, big breasts,
big boobs, big natural breasts, natural boobs, natural breasts, athletic
body, blush, slim waist, seductive image,serene, untouched beauty, clear,
lively, detailed face, upper body accent, white lips and makeup , blue white
eyes, seductive look, realistic,
output:
url: images/05092-936689648.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev
language:
- en
pipeline_tag: text-to-image
---
# Acorn Vikki
<Gallery />
## Model description
LoRA model of a beautiful woman. Not meant to represent any real person.
https://civitai.com/user/Seeker70
## Download model
Weights for this model are available in Safetensors format.
[Download](/DavidBaloches/Acorn_Vikki/tree/main) them in the Files & versions tab.
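A hedged usage sketch with `diffusers` (assumptions not stated in the card: the LoRA file sits at the repo root and loads onto the FLUX.1-dev base named in the metadata):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("DavidBaloches/Acorn_Vikki")  # assumes a single LoRA file at the repo root
pipe.to("cuda")

image = pipe("woman, sitting on the floor in her home", num_inference_steps=28).images[0]
image.save("example.png")
```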

| PrunaAI/chatdb-natural-sql-7b-GGUF-smashed | PrunaAI | 2025-02-26T05:45:41Z | 0 | 0 | null | [gguf, pruna-ai, base_model:chatdb/natural-sql-7b, base_model:quantized:chatdb/natural-sql-7b, endpoints_compatible, region:us, conversational] | null | 2025-02-26T05:45:01Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: chatdb/natural-sql-7b
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the chatdb/natural-sql-7b model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/chatdb-natural-sql-7b-GGUF-smashed and below it, a specific filename to download, such as: natural-sql-7b.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/chatdb-natural-sql-7b-GGUF-smashed natural-sql-7b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/chatdb-natural-sql-7b-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/chatdb-natural-sql-7b-GGUF-smashed natural-sql-7b.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run the model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m natural-sql-7b.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set CMAKE_ARGS in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./natural-sql-7b.IQ3_M.gguf",  # Download the model file first
    n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
    "<s>[INST] {prompt} [/INST]",  # Prompt (replace {prompt} with your input)
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True        # Whether to echo the prompt
)

# Chat Completion API
llm = Llama(model_path="./natural-sql-7b.IQ3_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."}
    ]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).

| free21cf/Qwen2.5_1.5B_MED_250226 | free21cf | 2025-02-26T05:45:32Z | 0 | 0 | transformers | [transformers, safetensors, qwen2, text-generation, trl, sft, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2025-02-26T05:44:00Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
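Pending that, a hedged `transformers` sketch (assuming standard text-generation pipeline usage, per the model's tags; the medical prompt is purely illustrative):

```python
# Hedged quick-start sketch; the card itself provides no code.
from transformers import pipeline

generator = pipeline("text-generation", model="free21cf/Qwen2.5_1.5B_MED_250226", device_map="auto")
print(generator("Question: What are common symptoms of anemia?\nAnswer:", max_new_tokens=128)[0]["generated_text"])
```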
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

| TFOCUS/Grok-3_14 | TFOCUS | 2025-02-26T05:39:41Z | 0 | 0 | null | [onnx, any-to-any, omega, omegalabs, bittensor, agi, license:mit, region:us] | any-to-any | 2025-02-26T05:24:03Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).

| Mattia2700/SmolLM-360M_ClinicalWhole_5e-05_constant_512_flattening | Mattia2700 | 2025-02-26T05:39:32Z | 0 | 0 | transformers | [transformers, safetensors, llama, text-generation, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2025-02-26T03:49:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
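A hedged quick-start sketch in the meantime (assuming plain causal-LM usage, per the text-generation tag; the clinical prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Mattia2700/SmolLM-360M_ClinicalWhole_5e-05_constant_512_flattening"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The patient presents with", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```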
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

| TFOCUS/Grok-3_11 | TFOCUS | 2025-02-26T05:38:59Z | 0 | 0 | null | [onnx, any-to-any, omega, omegalabs, bittensor, agi, license:mit, region:us] | any-to-any | 2025-02-26T05:24:02Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).

| mradermacher/MS3-RP-RP-half2-i1-GGUF | mradermacher | 2025-02-26T05:38:46Z | 0 | 0 | transformers | [transformers, gguf, mergekit, merge, en, base_model:mergekit-community/MS3-RP-RP-half2, base_model:quantized:mergekit-community/MS3-RP-RP-half2, endpoints_compatible, region:us, imatrix] | null | 2025-02-26T00:18:44Z |
---
base_model: mergekit-community/MS3-RP-RP-half2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mergekit-community/MS3-RP-RP-half2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MS3-RP-RP-half2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
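For example, a single-file quant from the table below can be fetched with `huggingface_hub` and then run with any GGUF-capable runtime (a hedged sketch, not from the card; the Q4_K_M filename matches the table below):

```python
# Download one quant file; run it afterwards with llama.cpp, llama-cpp-python, etc.
from huggingface_hub import hf_hub_download

path = hf_hub_download("mradermacher/MS3-RP-RP-half2-i1-GGUF", "MS3-RP-RP-half2.i1-Q4_K_M.gguf")
print(path)  # local path to the downloaded GGUF file
```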
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/MS3-RP-RP-half2-i1-GGUF/resolve/main/MS3-RP-RP-half2.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->

| Wodeyuanbukongda/dqn-SpaceInvadersNoFrameskip-v4 | Wodeyuanbukongda | 2025-02-26T05:38:17Z | 6 | 0 | stable-baselines3 | [stable-baselines3, SpaceInvadersNoFrameskip-v4, deep-reinforcement-learning, reinforcement-learning, model-index, region:us] | reinforcement-learning | 2025-02-23T04:57:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 504.00 +/- 207.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Wodeyuanbukongda -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Wodeyuanbukongda -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Wodeyuanbukongda
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
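Alternatively, the checkpoint can presumably be loaded directly with stable-baselines3 via `huggingface_sb3`; the filename below follows the usual RL Zoo convention and is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="Wodeyuanbukongda/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo naming
)
model = DQN.load(checkpoint)
```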

| TFOCUS/Grok-3_9 | TFOCUS | 2025-02-26T05:38:14Z | 0 | 0 | null | [onnx, any-to-any, omega, omegalabs, bittensor, agi, license:mit, region:us] | any-to-any | 2025-02-26T05:24:00Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).

| PrunaAI/af1tang-personaGPT-bnb-4bit-smashed | PrunaAI | 2025-02-26T05:37:59Z | 0 | 0 | null | [safetensors, gpt2, pruna-ai, 4-bit, bitsandbytes, region:us] | null | 2025-02-26T05:37:37Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo ORIGINAL_REPO_NAME are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/af1tang-personaGPT-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model ORIGINAL_REPO_NAME, which provided the base model. Please check its license before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).

| TFOCUS/Grok-3_5 | TFOCUS | 2025-02-26T05:37:07Z | 0 | 0 | null | [onnx, any-to-any, omega, omegalabs, bittensor, agi, license:mit, region:us] | any-to-any | 2025-02-26T05:23:59Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).

| TFOCUS/Grok-3_3 | TFOCUS | 2025-02-26T05:36:28Z | 0 | 0 | null | [onnx, any-to-any, omega, omegalabs, bittensor, agi, license:mit, region:us] | any-to-any | 2025-02-26T05:23:58Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).

| tapir1971/Qwen2.5_1.5B_MED_Class | tapir1971 | 2025-02-26T05:36:22Z | 0 | 0 | transformers | [transformers, safetensors, qwen2, text-generation, trl, sft, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2025-02-26T05:34:41Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
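A hedged chat-style sketch in the meantime (assuming the tokenizer ships a chat template, as the conversational tag suggests; the prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tapir1971/Qwen2.5_1.5B_MED_Class"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Classify: persistent dry cough for three weeks."}],
    tokenize=False, add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```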
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fredzzhang/generated
|
fredzzhang
| 2025-02-26T05:36:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-02-26T05:17:56Z |
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - fredzzhang/generated
This is a DreamBooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training for the text encoder was enabled.
## Intended uses & limitations
#### How to use
```python
# Hypothetical minimal sketch (untested); the prompt is the instance prompt this model was trained on.
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("fredzzhang/generated").to("cuda")
pipe("a photo of sks dog").images[0].save("sks_dog.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
TFOCUS/Grok-3_2
|
TFOCUS
| 2025-02-26T05:35:55Z | 0 | 0 | null |
[
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-26T05:23:57Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
bowilleatyou/e7d37251-72f9-4886-a80b-9bf604109d08
|
bowilleatyou
| 2025-02-26T05:35:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T00:56:02Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PrunaAI/af1tang-personaGPT-bnb-8bit-smashed
|
PrunaAI
| 2025-02-26T05:34:38Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"pruna-ai",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-26T05:34:05Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: af1tang/personaGPT
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly against the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the smashed model directly under your use-case conditions to see whether it benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use case, as sketched below.
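As a rough illustration of the "Sync" measurement, here is a minimal, hypothetical sketch (this is not Pruna's benchmark code; the model and inputs are whatever you load in the setup below):
```python
import time
import torch

def sync_latency(model, input_ids, max_new_tokens=64):
    """Measure "Sync" latency: the clock stops only after every queued GPU op has executed."""
    torch.cuda.synchronize()                 # drain any pending GPU work before starting
    start = time.perf_counter()
    model.generate(input_ids, max_new_tokens=max_new_tokens)
    torch.cuda.synchronize()                 # "Sync": wait for all GPU processes to finish
    return time.perf_counter() - start
```
An "Async" variant would simply omit the final `torch.cuda.synchronize()` and stop timing as soon as the output tensor is usable on the CPU.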
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo, af1tang/personaGPT, are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/af1tang-personaGPT-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("af1tang/personaGPT")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, af1tang/personaGPT, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
TOMFORD79/RDFOR79_T15
|
TOMFORD79
| 2025-02-26T05:34:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T03:55:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DavidAU/LORA-DeepHermes-R1-Reasoning-Llama-8B-rank-64-adapter
|
DavidAU
| 2025-02-26T05:33:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"deepseek",
"reasoning",
"thinking",
"Llama 3.1 Lora",
"Llama 3 Lora",
"Lora",
"Lora adapter",
"128k context",
"general usage",
"problem solving",
"brainstorming",
"solve riddles",
"mergekit",
"adapter",
"text-generation",
"en",
"base_model:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"base_model:adapter:NousResearch/DeepHermes-3-Llama-3-8B-Preview",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-02-26T05:27:14Z |
---
license: apache-2.0
library_name: peft
language:
- en
tags:
- deepseek
- reasoning
- thinking
- Llama 3.1 Lora
- Llama 3 Lora
- Lora
- Lora adapter
- 128k context
- general usage
- problem solving
- brainstorming
- solve riddles
- mergekit
- adapter
- peft
base_model:
- NousResearch/DeepHermes-3-Llama-3-8B-Preview
pipeline_tag: text-generation
---
<h2>LORA-DeepHermes-R1-Reasoning-Llama-8B-rank-64-adapter</h2>
This is a "LORA" adapter to merge "DeepHermes R1" reasoning / thinking with any Llama 3 or Llama 3.1 model using MERGEKIT.
Note that "higher" rank adapter(s) may work better than lower ones, but might also overwrite/change parts of the model you do not want
changed. Testing a new model with more than one rank of adapter is suggested to get the best results.
Also, for this specific adapter, there are suggested "System Prompts" at the bottom of this page to activate reasoning/thinking.
Your results will vary based on the model(s) you merge this adapter with.
<B>HOW TO MERGE THIS ADAPTER:</b>
You can use Mergekit "Colab" and/or Mergekit installed locally.
[ https://colab.research.google.com/github/mlabonne/llm-course/blob/main/Mergekit.ipynb ]
[ https://github.com/arcee-ai/mergekit ]
If you are doing multiple merges / steps in your merge, it is suggested you do this step LAST to ensure the adapter works correctly.
Here are some suggested "simple" methods to merge the adapter with a model.
<B>Method - Dare TIES:</B>
<pre>
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
parameters:
weight: 1
merge_method: dare_ties
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
<B>Method - Pass Through:</b>
<pre>
base_model: REPO/MODEL-NAME+DavidAU/mergeadapter
dtype: bfloat16
merge_method: passthrough
models:
- model: REPO/MODEL-NAME+DavidAU/mergeadapter
tokenizer_source: REPO/MODEL-NAME+DavidAU/mergeadapter
</pre>
Replace "REPO/MODEL-NAME" with the model to merge the adapter with.
Replace "DavidAU/mergeadapter" with the adapter you want to merge with the model.
IMPORTANT: Note "+" - this is critical.
If you are using Mergekit locally, you can still use the format above and Mergekit will download the model and adapter for you.
If you have downloaded the model(s) and adapter(s), you need to change the format to point at your local file system.
<B>Example Merge for Local Usage: </B>
<pre>
mergekit-yaml --lora-merge-cache HUGGING CACHE --copy-tokenizer --allow-crimes --cuda --out-shard-size 5B --lazy-unpickle --clone-tensors MERGEFILE SAVE-MERGE-TO
</pre>
---
<B>System Role / System Prompt - Augment The Model's Power:</b>
---
If you set / have a system prompt, it will affect both "generation" and "thinking/reasoning".
SIMPLE:
This is the generic system prompt used for generation and testing:
<PRE>
You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
</PRE>
This System Role/Prompt will give you "basic thinking/reasoning":
<PRE>
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
</PRE>
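For reference, here is a minimal, hypothetical sketch of attaching such a system prompt via a transformers chat template; the repo name is a placeholder for whatever model you merged this adapter into:
<pre>
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "your-repo/your-merged-model"  # placeholder for your merged model
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [
    {"role": "system", "content": "You are a deep thinking AI, ..."},  # paste the full prompt above
    {"role": "user", "content": "Which weighs more, a pound of feathers or a pound of steel?"},
]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=1024)[0], skip_special_tokens=True))
</pre>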
ADVANCED:
Logical and Creative - these will SIGNIFICANTLY alter the output, and many times improve it too.
This will also cause more thoughts, deeper thoughts, and in many cases more detailed/stronger thoughts.
Keep in mind you may also want to test the model with NO system prompt at all - not even the default one.
Special Credit to: Eric Hartford, Cognitivecomputations ; these are based on his work.
CRITICAL:
Copy and paste exactly as shown, preserve formatting and line breaks.
SIDE NOTE:
These can be used in ANY Deepseek / Thinking model, including models not in this repo.
These, if used in a "non-thinking" model, will also alter model performance.
<PRE>
You are an AI assistant developed by the world wide community of ai experts.
Your primary directive is to provide well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Scientific and Logical Approach: Your explanations should reflect the depth and precision of the greatest scientific minds.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
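Since requirement 4 above states the user never sees the <think></think> section, a front-end has to strip it before display. A minimal, hypothetical helper (not part of this adapter):
<pre>
import re

def strip_think(reply: str) -> str:
    # Drop the <think>...</think> reasoning span; return only the visible answer.
    return re.sub(r"<think>.*?</think>", "", reply, flags=re.DOTALL).strip()

print(strip_think("<think>step 1 ... step 6</think>Final answer: 42"))  # -> Final answer: 42
</pre>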
CREATIVE:
<PRE>
You are an AI assistant developed by a world wide community of ai experts.
Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses.
Formatting Requirements:
1. Always structure your replies using: <think>{reasoning}</think>{answer}
2. The <think></think> block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the <think></think> block may be left empty.
4. The user does not see the <think></think> section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a </think> and proceed to the {answer}
Response Guidelines:
1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>
|
TOMFORD79/RDFOR79_T14
|
TOMFORD79
| 2025-02-26T05:33:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T03:55:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bigsix1010/joel
|
Bigsix1010
| 2025-02-26T05:32:29Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T05:32:29Z |
---
license: apache-2.0
---
|
TOMFORD79/RDFOR79_T13
|
TOMFORD79
| 2025-02-26T05:31:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T03:55:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coffiee/lz6
|
coffiee
| 2025-02-26T05:30:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T05:29:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
irishprancer/0ed6fbd7-d6ea-4837-a480-383715bf778e
|
irishprancer
| 2025-02-26T05:29:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T03:34:43Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nikanurov1/gemma-2-2B-it-thinking-function_calling-V2
|
nikanurov1
| 2025-02-26T05:27:43Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T05:21:15Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V2
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nikanurov1/gemma-2-2B-it-thinking-function_calling-V2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tellof/Qwen2.5_1.5B_MED_0226
|
tellof
| 2025-02-26T05:26:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T05:25:10Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shabdobhedi/google_mt5_small_fine_tuning_lora-1
|
Shabdobhedi
| 2025-02-26T05:26:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-02-26T04:48:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PrunaAI/deepseek-ai-deepseek-coder-7b-instruct-v1.5-HQQ-4bit-smashed
|
PrunaAI
| 2025-02-26T05:26:06Z | 0 | 0 | null |
[
"llama",
"pruna-ai",
"hqq",
"region:us"
] | null | 2025-02-26T05:18:35Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo ORIGINAL_REPO_NAME are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/deepseek-ai-deepseek-coder-7b-instruct-v1.5-HQQ-4bit-smashed", device_map='auto')
except Exception:
    # Fall back to the generic HQQ loader if the engine-specific one fails
    model = AutoHQQHFModel.from_quantized("PrunaAI/deepseek-ai-deepseek-coder-7b-instruct-v1.5-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
Areumumi/Qwen2.5_1.5B_MED_0226
|
Areumumi
| 2025-02-26T05:25:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T05:24:16Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ShinHayan/Qwen2.5_1.5B_MED_Class_250226
|
ShinHayan
| 2025-02-26T05:24:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T05:23:20Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NotoriousH2/Qwen2.5_1.5B_MED_Class_0226
|
NotoriousH2
| 2025-02-26T05:24:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T05:23:05Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/animemixillustrious-v50-sdxl
|
John6666
| 2025-02-26T05:24:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"2D",
"2.5D",
"cute",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-02-26T05:18:03Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- 2D
- 2.5D
- cute
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/933065/animemixillustrious?modelVersionId=1463328).
This model was created by [koronen](https://civitai.com/user/koronen).
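As a diffusers-format SDXL checkpoint (per the repo tags), it can presumably be loaded with `StableDiffusionXLPipeline`; a minimal sketch, with an illustrative prompt:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint in half precision on GPU
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/animemixillustrious-v50-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Illustrious-based anime models usually respond well to tag-style prompts (prompt is illustrative)
image = pipe("1girl, anime style, looking at viewer, masterpiece").images[0]
image.save("sample.png")
```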
|
DitDahDitDit/PolicyBased-CartPole-v1
|
DitDahDitDit
| 2025-02-26T05:24:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-02-26T05:23:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PolicyBased-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
apitchai/Llama-3.1-8B-Instruct-F1-NLQ-CoT-10-Epochs-Finetuned-16bit
|
apitchai
| 2025-02-26T05:21:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T05:20:34Z |
---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** apitchai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
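Since the repository holds merged 16-bit weights (per the model name), it should load like a regular transformers causal LM; a minimal sketch, with an illustrative prompt:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "apitchai/Llama-3.1-8B-Instruct-F1-NLQ-CoT-10-Epochs-Finetuned-16bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative F1 natural-language query
messages = [{"role": "user", "content": "Which driver won the 2021 F1 championship?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```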
|
werty1248/EXAONE-3.5-7.8B-s1.1-Ko-Native
|
werty1248
| 2025-02-26T05:21:16Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-24T06:46:50Z |
---
library_name: transformers
---

- Inference results: [werty1248/s1.1-Ko-Native-result](https://huggingface.co/datasets/werty1248/s1.1-Ko-Native-result)
- Better than [werty1248/EXAONE-3.5-7.8B-s1-Ko-no-sample-packing](https://huggingface.co/werty1248/EXAONE-3.5-7.8B-s1-Ko-no-sample-packing), but the score difference from the original model is negligible
### Training Details
- Used the [official training code](https://github.com/simplescaling/s1)
- 8xA40, 2.5 hours
- Total batch size: 16 -> 8
- block_size=16384
- gradient_checkpointing=True
### Others
- VRAM was right at the limit (block_size=20000 and gradient_accumulation_steps=2 both caused CUDA OOM)
- The chronic failure mode where the model, after one wrong train of thought, says *wait a moment* and then keeps repeating the same mistake remains unresolved
- Possibly a trait of EXAONE, or of small models, or ~of the translated data~
|
RichardErkhov/BEGADE_-_chat-gguf
|
RichardErkhov
| 2025-02-26T05:21:07Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T05:10:56Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
chat - GGUF
- Model creator: https://huggingface.co/BEGADE/
- Original model: https://huggingface.co/BEGADE/chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q2_K.gguf) | Q2_K | 0.08GB |
| [chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q3_K.gguf) | Q3_K | 0.09GB |
| [chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.IQ4_XS.gguf) | IQ4_XS | 0.1GB |
| [chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q4_0.gguf) | Q4_0 | 0.1GB |
| [chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q4_K.gguf) | Q4_K | 0.11GB |
| [chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q4_K_M.gguf) | Q4_K_M | 0.11GB |
| [chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q4_1.gguf) | Q4_1 | 0.11GB |
| [chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q5_0.gguf) | Q5_0 | 0.11GB |
| [chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q5_K.gguf) | Q5_K | 0.12GB |
| [chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q5_1.gguf) | Q5_1 | 0.12GB |
| [chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q6_K.gguf) | Q6_K | 0.13GB |
| [chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/BEGADE_-_chat-gguf/blob/main/chat.Q8_0.gguf) | Q8_0 | 0.17GB |
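A minimal sketch of running one of these quantized files locally with llama-cpp-python (the chosen file and prompt are illustrative; download the `.gguf` file first):
```python
from llama_cpp import Llama

# Point at a downloaded quant, e.g. the Q4_K_M file from the table above
llm = Llama(model_path="chat.Q4_K_M.gguf", n_ctx=2048)

out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```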
Original model description:
---
library_name: transformers
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chat
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Perflow-Shuai/nvila_lite_2b_dev-model
|
Perflow-Shuai
| 2025-02-26T05:20:02Z | 7 | 0 | null |
[
"safetensors",
"vila",
"custom_code",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:cc",
"region:us"
] | null | 2025-02-21T13:51:31Z |
---
license: cc
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
---
TODOs:
* check that the numerical output matches the original VILA implementation
* check training stability
* save_pretrained()

Already finished:
* AutoModel.from_pretrained() / device_map="auto" sharding
* loading
* fix recursive imports
* text conversation
* image + text conversation:
  * .generate() / .generate_content()
  * llava/cli/infer.py
  * tests/bash/test_inference.sh
## NVILA HF Compatible Mode
Remote model loading example
```python
from transformers import AutoConfig, AutoModel
from termcolor import colored
model_path = "Efficient-Large-Model/nvila_lite_3b_dev"
print("main_dev.py, loading from ", model_path)
# config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
# model = AutoModel.from_config(config, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True, device_map="auto")
res = model.generate_content([
"how are you today?"
])
print(colored(res, "cyan", attrs=["bold"]))
print("---" * 40)
import PIL.Image
response = model.generate_content([
PIL.Image.open("inference_test/test_data/caption_meat.jpeg"),
"describe the image?"
])
print(colored(response, "cyan", attrs=["bold"]))
```
|
RichardErkhov/semantixai_-_Lloro-8bits
|
RichardErkhov
| 2025-02-26T05:19:57Z | 0 | 0 | null |
[
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-26T05:15:51Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Lloro - bnb 8bits
- Model creator: https://huggingface.co/semantixai/
- Original model: https://huggingface.co/semantixai/Lloro/
Original model description:
---
base_model: codellama/CodeLlama-7b-Instruct-hf
license: llama2
datasets:
- semantixai/LloroV3
language:
- pt
tags:
- analytics
- analise-dados
- portugues-BR
co2_eq_emissions:
emissions: 1320
source: "Lacoste, Alexandre, et al. “Quantifying the Carbon Emissions of Machine Learning.” ArXiv (Cornell University), 21 Oct. 2019, https://doi.org/10.48550/arxiv.1910.09700."
training_type: "fine-tuning"
geographical_location: "Council Bluffs, Iowa, USA."
hardware_used: "1 A100 40GB GPU"
---
**Lloro 7B**
<img src="https://cdn-uploads.huggingface.co/production/uploads/653176dc69fffcfe1543860a/h0kNd9OTEu1QdGNjHKXoq.png" width="300" alt="Lloro-7b Logo"/>
Lloro, developed by Semantix Research Labs, is a language model trained to effectively perform Portuguese data analysis in Python. It is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf that was trained on synthetic datasets. The fine-tuning process was performed using the QLoRA methodology on an A100 GPU with 40 GB of VRAM.
## **New Text to SQL Model**
Release of [Lloro SQL](https://huggingface.co/semantixai/Lloro-SQL)
**Model description**
Model type: A 7B-parameter model fine-tuned on synthetic datasets.
Language(s) (NLP): Primarily Portuguese, but the model is capable of understanding English as well
Finetuned from model: codellama/CodeLlama-7b-Instruct-hf
**What is Lloro's intended use(s)?**
Lloro is built for data analysis in Portuguese contexts.
Input: Text
Output: Text (Code)
**V3 Release**
- Context length increased to 2048.
- Fine-tuning dataset increased to 74,222 examples.
**Usage**
Using Transformers
```python
# Import required libraries
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "semantixai/Lloro"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForCausalLM.from_pretrained(model_name, return_dict=True)
device = "cuda" if torch.cuda.is_available() else "cpu"
base_model = base_model.to(device)

# Generate Python code for a Portuguese data-analysis prompt
prompt = "Desenvolva um algoritmo em Python para calcular a média e a mediana dos preços de vendas por tipo de material do produto."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
outputs = base_model.generate(
    input_ids,
    do_sample=True,
    top_p=0.95,
    max_new_tokens=2048,
    temperature=0.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Using an OpenAI compatible inference server (like [vLLM](https://docs.vllm.ai/en/latest/index.html))
```python
from openai import OpenAI

# Assumes an OpenAI-compatible server (e.g. vLLM) is already serving the model at this address
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM's server accepts any placeholder key by default
)
user_prompt = "Desenvolva um algoritmo em Python para calcular a média e a mediana dos preços de vendas por tipo de material do produto."
completion = client.chat.completions.create(temperature=0.1,frequency_penalty=0.1,model="semantixai/Lloro",messages=[{"role":"system","content":"Provide answers in Python without explanations, only the code"},{"role":"user","content":user_prompt}])
print(completion.choices[0].message.content)
```
**Params**
Training Parameters
| Params | Training Data | Examples | Tokens | LR |
|--------|--------------------------------------|----------|-----------|------|
| 7B | Pairs of synthetic instructions/code | 74,222 | 9,351,532 | 2e-4 |
**Model Sources**
Test Dataset Repository: <https://huggingface.co/datasets/semantixai/LloroV3>
Model Dates: Lloro was trained between February 2024 and April 2024.
**Performance**
| Model | LLM as Judge | Code Bleu Score | Rouge-L | CodeBert-Precision | CodeBert-Recall | CodeBert-F1 | CodeBert-F3 |
|---------------|--------------|-----------------|---------|--------------------|-----------------|-------------|-------------|
| GPT 3.5 | 94.29% | 0.3538 | 0.3756 | 0.8099 | 0.8176 | 0.8128 | 0.8164 |
| Instruct-Base | 88.77% | 0.3666 | 0.3351 | 0.8244 | 0.8025 | 0.8121 | 0.8052 |
| Instruct-FT | 97.95% | 0.5967 | 0.6717 | 0.9090 | 0.9182 | 0.9131 | 0.9171 |
**Training Infos:**
The following hyperparameters were used during training:
| Parameter | Value |
|---------------------------|--------------------------|
| learning_rate | 2e-4 |
| weight_decay | 0.0001 |
| train_batch_size | 7 |
| eval_batch_size | 7 |
| seed | 42 |
| optimizer | Adam - paged_adamw_32bit |
| lr_scheduler_type | cosine |
| lr_scheduler_warmup_ratio | 0.06 |
| num_epochs | 4.0 |
**QLoRA hyperparameters**
The following parameters, related to Quantized Low-Rank Adaptation (QLoRA) and quantization, were used during training:
| Parameter | Value |
|------------------|-----------|
| lora_r | 64 |
| lora_alpha | 256 |
| lora_dropout | 0.1 |
| storage_dtype | "nf4" |
| compute_dtype | "bfloat16"|
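For illustration, a minimal sketch of how the table's values would map onto `peft`/`transformers` configuration objects (a reconstruction under those assumptions, not the exact training script):
```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 storage with bfloat16 compute, as in the table above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA settings from the table above
lora_config = LoraConfig(
    r=64,
    lora_alpha=256,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)
```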
**Experiments**
| Model | Epochs | Overfitting | Final Epochs | Training Hours | CO2 Emission (Kg) |
|-----------------------|--------|-------------|--------------|-----------------|-------------------|
| Code Llama Instruct | 1 | No | 1 | 3.01 | 0.43 |
| Code Llama Instruct | 4 | Yes | 3 | 9.25 | 1.32 |
**Framework versions**
| Package | Version |
|---------------|-----------|
| Datasets | 2.14.3 |
| Pytorch | 2.0.1 |
| Tokenizers | 0.14.1 |
| Transformers | 4.34.0 |
|
romainnn/f1451b1a-851c-468f-81ed-0cd5b76d5847
|
romainnn
| 2025-02-26T05:19:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T03:23:25Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f1451b1a-851c-468f-81ed-0cd5b76d5847
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7ecd0147503013c8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7ecd0147503013c8_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
early_stopping_threshold: 0.0001
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_card: false
hub_model_id: romainnn/f1451b1a-851c-468f-81ed-0cd5b76d5847
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 6528
micro_batch_size: 4
mlflow_experiment_name: /tmp/7ecd0147503013c8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.04
wandb_entity: null
wandb_mode: online
wandb_name: bc797de1-d7d3-4140-abdc-c6593cfd47f0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: bc797de1-d7d3-4140-abdc-c6593cfd47f0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f1451b1a-851c-468f-81ed-0cd5b76d5847
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0201
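A minimal sketch of loading the LoRA adapter on top of its base model with peft (generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Attach the adapter from this repo to the base model named in the config above
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "romainnn/f1451b1a-851c-468f-81ed-0cd5b76d5847")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B")

inputs = tokenizer("Hello!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```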
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 3186
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3109 | 0.0006 | 1 | 1.3510 |
| 0.3226 | 0.0628 | 100 | 0.2622 |
| 0.2041 | 0.1256 | 200 | 0.2294 |
| 0.2346 | 0.1883 | 300 | 0.2099 |
| 0.289 | 0.2511 | 400 | 0.1895 |
| 0.16 | 0.3139 | 500 | 0.1772 |
| 0.2231 | 0.3767 | 600 | 0.1621 |
| 0.1785 | 0.4395 | 700 | 0.1533 |
| 0.1155 | 0.5022 | 800 | 0.1379 |
| 0.1367 | 0.5650 | 900 | 0.1232 |
| 0.1548 | 0.6278 | 1000 | 0.1156 |
| 0.123 | 0.6906 | 1100 | 0.1037 |
| 0.128 | 0.7534 | 1200 | 0.0949 |
| 0.1245 | 0.8161 | 1300 | 0.0862 |
| 0.0587 | 0.8789 | 1400 | 0.0796 |
| 0.0818 | 0.9417 | 1500 | 0.0714 |
| 0.042 | 1.0049 | 1600 | 0.0643 |
| 0.0453 | 1.0677 | 1700 | 0.0574 |
| 0.0539 | 1.1305 | 1800 | 0.0520 |
| 0.0358 | 1.1933 | 1900 | 0.0476 |
| 0.0284 | 1.2561 | 2000 | 0.0428 |
| 0.0419 | 1.3188 | 2100 | 0.0397 |
| 0.0311 | 1.3816 | 2200 | 0.0343 |
| 0.0377 | 1.4444 | 2300 | 0.0313 |
| 0.0374 | 1.5072 | 2400 | 0.0284 |
| 0.0233 | 1.5700 | 2500 | 0.0257 |
| 0.022 | 1.6327 | 2600 | 0.0238 |
| 0.0228 | 1.6955 | 2700 | 0.0223 |
| 0.02 | 1.7583 | 2800 | 0.0212 |
| 0.0196 | 1.8211 | 2900 | 0.0207 |
| 0.0199 | 1.8839 | 3000 | 0.0202 |
| 0.0195 | 1.9466 | 3100 | 0.0201 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
KhushiDS/whisper-large-v3-Hindi
|
KhushiDS
| 2025-02-26T05:18:21Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"automatic-speech-recognition",
"hi",
"dataset:google/fleurs",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-02-26T05:17:48Z |
---
base_model: openai/whisper-large-v3
datasets:
- google/fleurs
library_name: transformers
license: apache-2.0
metrics:
- wer
model-index:
- name: whisper-large-v3-Hindi-Version1
results: []
language:
- hi
pipeline_tag: automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-Hindi-Version1
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1571
- Wer: 18.1667
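A minimal usage sketch with the transformers ASR pipeline (the audio path is illustrative; the framework list below mentions PEFT, so the repository may hold an adapter rather than full weights, in which case it would need to be merged or loaded via peft instead):
```python
from transformers import pipeline

# Transcribe a local Hindi audio file (requires ffmpeg for decoding)
asr = pipeline("automatic-speech-recognition", model="KhushiDS/whisper-large-v3-Hindi")
result = asr("hindi_sample.wav", generate_kwargs={"language": "hindi"})
print(result["text"])
```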
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.1799 | 6.7797 | 2000 | 0.1806 | 21.3881 |
| 0.1631 | 13.5593 | 4000 | 0.1678 | 20.0703 |
| 0.1436 | 20.3390 | 6000 | 0.1622 | 19.4748 |
| 0.145 | 27.1186 | 8000 | 0.1593 | 18.8403 |
| 0.1316 | 33.8983 | 10000 | 0.1578 | 18.5670 |
| 0.1293 | 40.6780 | 12000 | 0.1574 | 18.5182 |
| 0.1281 | 47.4576 | 14000 | 0.1570 | 18.4010 |
| 0.1258 | 54.2373 | 16000 | 0.1569 | 18.0594 |
| 0.1192 | 61.0169 | 18000 | 0.1571 | 18.4108 |
| 0.128 | 67.7966 | 20000 | 0.1571 | 18.1667 |
### Framework versions
- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
joaomarcelom12/inocencia
|
joaomarcelom12
| 2025-02-26T05:17:54Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-02-26T04:37:58Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
shrey123354/autocaption_female_lean
|
shrey123354
| 2025-02-26T05:17:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:51:10Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sidf
---
# Autocaption_Female_Lean
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sidf` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('shrey123354/autocaption_female_lean', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
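For example, with the trigger word at the start of the prompt (the rest of the prompt is illustrative):
```py
image = pipeline('Sidf, a woman in soft natural light').images[0]
```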
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
TobiGeth/tg_user_133073307_lora_1740546271
|
TobiGeth
| 2025-02-26T05:17:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T05:17:29Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: USER_133073307_1740546271
---
# Tg_User_133073307_Lora_1740546271
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `USER_133073307_1740546271` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('TobiGeth/tg_user_133073307_lora_1740546271', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
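For example, with the trigger word at the start of the prompt (the rest of the prompt is illustrative):
```py
image = pipeline('USER_133073307_1740546271, a portrait photo').images[0]
```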
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
riyazahuja/Improver-DeepSeek-R1-Distill-Qwen-1.5B_full
|
riyazahuja
| 2025-02-26T05:15:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:improver/length_human_train.jsonl",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T05:12:08Z |
---
library_name: transformers
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- generated_from_trainer
datasets:
- improver/length_human_train.jsonl
model-index:
- name: data/user_data/riyaza/saved_models/DeepSeek-R1-Distill-Qwen-1.5B_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
# optionally might have model_type or tokenizer_type
# model_type: AutoModelForCausalLM
# tokenizer_type: AutoTokenizer
# Automatically upload checkpoint and final model to HF
# hub_model_id: username/custom_model_name
trust_remote_code: true
load_in_8bit: false
load_in_4bit: false
strict: false
deepspeed: deepspeed_configs/zero2.json
datasets:
- path: improver/length_human_train.jsonl
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: /data/user_data/riyaza/saved_models/DeepSeek-R1-Distill-Qwen-1.5B_full
sequence_len: 4096
sample_packing: false
pad_to_sequence_len:
wandb_project: "DeepSeek-R1-Distill-Qwen-1.5B_full"
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00005
train_on_inputs: false
group_by_length: true
bf16: auto
fp16:
tf32: false
gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 2
xformers_attention:
flash_attention:
warmup_steps: 100
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens: 256
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# data/user_data/riyaza/saved_models/DeepSeek-R1-Distill-Qwen-1.5B_full
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on the improver/length_human_train.jsonl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2778
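Since this is a full fine-tune, it loads like any transformers causal LM; a minimal sketch, with an illustrative prompt:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "riyazahuja/Improver-DeepSeek-R1-Distill-Qwen-1.5B_full"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Improve the following proof sketch:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```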
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3612 | 0.5014 | 173 | 0.2959 |
| 0.0927 | 1.0029 | 346 | 0.2813 |
| 0.1156 | 1.5043 | 519 | 0.2778 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.21.0
|
PrunaAI/cognitivecomputations-dolphin-2.6-mistral-7b-dpo-laser-bnb-4bit-smashed
|
PrunaAI
| 2025-02-26T05:15:23Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"pruna-ai",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-26T05:11:14Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo ORIGINAL_REPO_NAME are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/cognitivecomputations-dolphin-2.6-mistral-7b-dpo-laser-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
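As a quick sanity check of the memory savings, you can inspect the loaded model's footprint; a minimal sketch using the standard transformers helper:
```python
# Reported footprint of the quantized weights, in GB.
print(f"Model footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```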
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
svsvenu/layoutlm-funsd
|
svsvenu
| 2025-02-26T05:12:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-02-26T05:06:38Z |
---
library_name: transformers
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the FUNSD dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6947
- Answer: {'precision': 0.7250554323725056, 'recall': 0.8084054388133498, 'f1': 0.7644652250146114, 'number': 809}
- Header: {'precision': 0.30158730158730157, 'recall': 0.31932773109243695, 'f1': 0.310204081632653, 'number': 119}
- Question: {'precision': 0.767586821015138, 'recall': 0.8093896713615023, 'f1': 0.7879341864716636, 'number': 1065}
- Overall Precision: 0.7225
- Overall Recall: 0.7797
- Overall F1: 0.75
- Overall Accuracy: 0.8070
## Model description
More information needed
## Intended uses & limitations
More information needed
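Pending fuller documentation, here is a minimal inference sketch (the words and bounding boxes below are placeholders; LayoutLM expects word boxes normalized to a 0-1000 scale):
```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("svsvenu/layoutlm-funsd")

words = ["Hello", "world"]  # placeholder OCR output
normalized_word_boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]  # placeholder boxes

# Repeat each word's box for every subword token it produces.
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
# Add boxes for the special [CLS] and [SEP] tokens.
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

encoding = tokenizer(" ".join(words), return_tensors="pt")
with torch.no_grad():
    outputs = model(
        input_ids=encoding["input_ids"],
        bbox=torch.tensor([token_boxes]),
        attention_mask=encoding["attention_mask"],
        token_type_ids=encoding["token_type_ids"],
    )
predicted_label_ids = outputs.logits.argmax(-1)  # one label id per token
```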
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.742 | 1.0 | 10 | 1.5266 | {'precision': 0.027950310559006212, 'recall': 0.03337453646477132, 'f1': 0.030422535211267605, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.2287292817679558, 'recall': 0.19436619718309858, 'f1': 0.21015228426395938, 'number': 1065} | 0.1251 | 0.1174 | 0.1211 | 0.4247 |
| 1.412 | 2.0 | 20 | 1.2278 | {'precision': 0.19525801952580196, 'recall': 0.173053152039555, 'f1': 0.1834862385321101, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4582560296846011, 'recall': 0.463849765258216, 'f1': 0.4610359309379375, 'number': 1065} | 0.3532 | 0.3181 | 0.3347 | 0.5888 |
| 1.0962 | 3.0 | 30 | 0.9645 | {'precision': 0.4753157290470723, 'recall': 0.511742892459827, 'f1': 0.4928571428571428, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.6110183639398998, 'recall': 0.6873239436619718, 'f1': 0.6469288555015466, 'number': 1065} | 0.5478 | 0.5750 | 0.5611 | 0.7154 |
| 0.838 | 4.0 | 40 | 0.7924 | {'precision': 0.6248671625929861, 'recall': 0.7268232385661311, 'f1': 0.672, 'number': 809} | {'precision': 0.12698412698412698, 'recall': 0.06722689075630252, 'f1': 0.08791208791208792, 'number': 119} | {'precision': 0.6594863297431649, 'recall': 0.7474178403755869, 'f1': 0.7007042253521126, 'number': 1065} | 0.6296 | 0.6984 | 0.6622 | 0.7647 |
| 0.6636 | 5.0 | 50 | 0.7294 | {'precision': 0.6722037652270211, 'recall': 0.7503090234857849, 'f1': 0.7091121495327103, 'number': 809} | {'precision': 0.2077922077922078, 'recall': 0.13445378151260504, 'f1': 0.16326530612244897, 'number': 119} | {'precision': 0.6664086687306502, 'recall': 0.8084507042253521, 'f1': 0.7305897327110734, 'number': 1065} | 0.6532 | 0.7446 | 0.6959 | 0.7781 |
| 0.5632 | 6.0 | 60 | 0.6983 | {'precision': 0.660164271047228, 'recall': 0.7948084054388134, 'f1': 0.7212563095905777, 'number': 809} | {'precision': 0.21739130434782608, 'recall': 0.12605042016806722, 'f1': 0.1595744680851064, 'number': 119} | {'precision': 0.7283842794759825, 'recall': 0.7830985915492957, 'f1': 0.7547511312217194, 'number': 1065} | 0.6819 | 0.7486 | 0.7137 | 0.7905 |
| 0.4868 | 7.0 | 70 | 0.6635 | {'precision': 0.7008830022075055, 'recall': 0.7849196538936959, 'f1': 0.7405247813411079, 'number': 809} | {'precision': 0.25742574257425743, 'recall': 0.2184873949579832, 'f1': 0.23636363636363636, 'number': 119} | {'precision': 0.7467248908296943, 'recall': 0.8028169014084507, 'f1': 0.7737556561085973, 'number': 1065} | 0.7045 | 0.7607 | 0.7315 | 0.7993 |
| 0.4332 | 8.0 | 80 | 0.6626 | {'precision': 0.6882168925964547, 'recall': 0.8158220024721878, 'f1': 0.7466063348416289, 'number': 809} | {'precision': 0.2727272727272727, 'recall': 0.226890756302521, 'f1': 0.24770642201834864, 'number': 119} | {'precision': 0.7463456577815993, 'recall': 0.8150234741784037, 'f1': 0.7791741472172352, 'number': 1065} | 0.7001 | 0.7802 | 0.7380 | 0.7992 |
| 0.3853 | 9.0 | 90 | 0.6623 | {'precision': 0.7160220994475138, 'recall': 0.8009888751545118, 'f1': 0.7561260210035006, 'number': 809} | {'precision': 0.30927835051546393, 'recall': 0.25210084033613445, 'f1': 0.2777777777777778, 'number': 119} | {'precision': 0.753448275862069, 'recall': 0.8206572769953052, 'f1': 0.7856179775280899, 'number': 1065} | 0.7179 | 0.7787 | 0.7471 | 0.8031 |
| 0.3733 | 10.0 | 100 | 0.6695 | {'precision': 0.7180327868852459, 'recall': 0.8121137206427689, 'f1': 0.7621809744779582, 'number': 809} | {'precision': 0.28846153846153844, 'recall': 0.25210084033613445, 'f1': 0.26905829596412556, 'number': 119} | {'precision': 0.77068345323741, 'recall': 0.8046948356807512, 'f1': 0.7873220027560864, 'number': 1065} | 0.7245 | 0.7747 | 0.7488 | 0.8085 |
| 0.3201 | 11.0 | 110 | 0.6826 | {'precision': 0.7122381477398015, 'recall': 0.7985166872682324, 'f1': 0.752913752913753, 'number': 809} | {'precision': 0.32142857142857145, 'recall': 0.3025210084033613, 'f1': 0.3116883116883117, 'number': 119} | {'precision': 0.7510620220900595, 'recall': 0.8300469483568075, 'f1': 0.7885816235504014, 'number': 1065} | 0.7131 | 0.7858 | 0.7477 | 0.8048 |
| 0.3027 | 12.0 | 120 | 0.6841 | {'precision': 0.7213656387665198, 'recall': 0.8096415327564895, 'f1': 0.762958648806057, 'number': 809} | {'precision': 0.34210526315789475, 'recall': 0.3277310924369748, 'f1': 0.33476394849785407, 'number': 119} | {'precision': 0.7768744354110207, 'recall': 0.8075117370892019, 'f1': 0.7918968692449355, 'number': 1065} | 0.7299 | 0.7797 | 0.7540 | 0.8068 |
| 0.2902 | 13.0 | 130 | 0.6871 | {'precision': 0.7210065645514223, 'recall': 0.8145859085290482, 'f1': 0.7649448636099826, 'number': 809} | {'precision': 0.32142857142857145, 'recall': 0.3025210084033613, 'f1': 0.3116883116883117, 'number': 119} | {'precision': 0.7732506643046945, 'recall': 0.819718309859155, 'f1': 0.7958067456700091, 'number': 1065} | 0.7276 | 0.7868 | 0.7560 | 0.8073 |
| 0.2694 | 14.0 | 140 | 0.6911 | {'precision': 0.7197802197802198, 'recall': 0.8096415327564895, 'f1': 0.7620709714950552, 'number': 809} | {'precision': 0.32456140350877194, 'recall': 0.31092436974789917, 'f1': 0.31759656652360513, 'number': 119} | {'precision': 0.7796762589928058, 'recall': 0.8140845070422535, 'f1': 0.7965089572806615, 'number': 1065} | 0.7299 | 0.7822 | 0.7551 | 0.8083 |
| 0.2721 | 15.0 | 150 | 0.6947 | {'precision': 0.7250554323725056, 'recall': 0.8084054388133498, 'f1': 0.7644652250146114, 'number': 809} | {'precision': 0.30158730158730157, 'recall': 0.31932773109243695, 'f1': 0.310204081632653, 'number': 119} | {'precision': 0.767586821015138, 'recall': 0.8093896713615023, 'f1': 0.7879341864716636, 'number': 1065} | 0.7225 | 0.7797 | 0.75 | 0.8070 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Tokenizers 0.21.0
|
x2bee/ModernBERT-ecs-GIST
|
x2bee
| 2025-02-26T05:11:36Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1799998",
"loss:CachedGISTEmbedLoss",
"arxiv:1908.10084",
"base_model:x2bee/KoModernBERT-base-mlm-ecs-simcse",
"base_model:finetune:x2bee/KoModernBERT-base-mlm-ecs-simcse",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-25T08:04:07Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1799998
- loss:CachedGISTEmbedLoss
base_model: x2bee/KoModernBERT-base-mlm-ecs-simcse
widget:
- source_sentence: 공용 다운 재킷은 다양한 체형과 스타일에 맞게 설계된 따뜻하고 편안한 외투이다. 이 재킷은 자연스러운 다운 충전재로
보온성을 극대화하여 겨울철의 추위를 효과적으로 막아준다. 또한, 방수 기능을 갖춘 외부 소재로 제작되어 비 오는 날씨에도 적합하다. 캐주얼한
디자인으로 일상생활은 물론 아웃도어 활동에도 잘 어울린다.
sentences:
- 소형 세탁기는 작은 공간에서도 사용 가능하며, 빠른 세탁이 가능한 제품이다. 따라서 바쁜 일상 속에서도 쉽게 사용할 수 있다. 이 제품은 환경
친화적인 소비를 원하는 가정에 알맞은 선택이다.
- 이 재킷은 다양한 체형에 잘 맞도록 설계되어 편안함을 제공하며, 겨울철에도 따뜻함을 유지해주는 외투이다. 방수 기능이 있어 비 오는 날에도
착용할 수 있고, 캐주얼한 디자인으로 일상적인 활동과 아웃도어에도 적합하다.
- 공용 다운 재킷은 모든 체형에 맞지 않으며, 추위를 잘 막아주지 않는다. 방수 기능이 없어서 비 오는 날씨에는 적합하지 않으며, 디자인이 너무
정장 스타일이라 아웃도어 활동에는 어울리지 않는다.
- source_sentence: 농구용 무릎 보호대는 농구를 하는 동안 무릎을 보호하고 부상을 예방하기 위한 장비이다. 이 보호대는 탄력 있는 소재로
제작되어 착용 시 편안함을 주며, 무릎 관절에 가해지는 압력을 줄여준다. 또한, 운동 중에 발생할 수 있는 충격을 흡수하여 선수의 안전을 도모하는
데 도움을 준다.
sentences:
- 농구를 하는 선수들에게 무릎을 안전하게 보호하고 부상을 방지하기 위해 설계된 장비가 바로 농구용 무릎 보호대이다.
- 농구용 무릎 보호대는 농구를 하는 동안 무릎에 아무런 보호 효과도 주지 않는다.
- 고농축 세럼은 피부의 주름을 줄이고 탄력성을 높이는 데 효과적이다.
- source_sentence: 러닝머신은 실내에서 안전하게 달리거나 걷기 위해 설계된 운동 기구이다. 사용자가 원하는 속도와 경사를 설정할 수
있어 개인의 운동 능력에 맞춰 조정이 가능하다. 다양한 프로그램과 기능이 탑재되어 있어 지루하지 않게 운동할 수 있도록 도와준다. 특히 날씨와
상관없이 언제든지 운동할 수 있는 장점이 있다.
sentences:
- 러닝머신은 사용자가 언제든지 실내에서 운동할 수 있도록 돕는 장비여서, 다양한 설정을 통해 각자의 필요에 맞춰 조절할 수 있다.
- 레터링 맨투맨은 편안하면서도 세련된 느낌을 주는 캐주얼한 옷으로, 다양한 메시지가 담겨 있다.
- 러닝머신은 비가 오는 날에만 사용할 수 있는 운동 기구여서, 속도와 경사를 설정할 수 없다.
- source_sentence: 실내 농구대는 집이나 실내 공간에서 농구를 즐길 수 있도록 설계된 장비로, 공간을 절약하면서도 농구 연습 및 놀이를
가능하게 해준다.
sentences:
- 헬스케어와 웰빙을 주제로 한 봉제 인형은 어린이들에게 스트레스를 해소하고 건강한 생활습관을 배울 수 있는 기회를 제공한다. 또한, 이 인형은
교육적인 자료가 포함되어 있어 학습 효과를 높인다.
- 실내 농구대는 작은 공간에서도 농구를 할 수 있게 도와주는 매우 유용한 스포츠 장비이다.
- 실내 농구대는 외부에서만 사용할 수 있는 장비로, 실내에서는 사용할 수 없다.
- source_sentence: 다지기 기구는 재료를 효과적으로 다지고 혼합할 수 있는 주방 도구이다. 이 기구는 주로 요리 시 재료의 결합과 질감을
향상시키기 위해 사용된다. 다지기 기구는 다양한 크기와 형태로 제공되어, 사용자의 필요에 맞게 선택할 수 있다. 이를 통해 요리의 품질을 높이고,
조리 시간을 단축할 수 있다.
sentences:
- 다지기 기구는 재료를 혼합하지 않고 오히려 재료를 분리하는 주방 도구이다. 이는 요리를 할 때 전혀 도움이 되지 않는다.
- 하드캔디는 설탕이나 시럽으로 만든 단단한 과자이며, 여러 가지 맛과 색을 갖고 있어 오랫동안 즐길 수 있다. 이 과자는 간식이나 선물용으로
많이 사용되며, 아이들과 성인들 모두에게 인기가 있다.
- 다지기 기구는 음식을 조리할 때 재료를 잘 섞고 부드럽게 만드는 데 도움을 주는 필수 주방 도구이다. 이는 요리의 맛과 질을 개선하고, 요리
과정을 보다 효율적으로 만들어 준다.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on x2bee/KoModernBERT-base-mlm-ecs-simcse
results:
- task:
type: triplet
name: Triplet
dataset:
name: test triplet
type: test_triplet
metrics:
- type: cosine_accuracy
value: 0.9791250228881836
name: Cosine Accuracy
---
# SentenceTransformer based on x2bee/KoModernBERT-base-mlm-ecs-simcse
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [x2bee/KoModernBERT-base-mlm-ecs-simcse](https://huggingface.co/x2bee/KoModernBERT-base-mlm-ecs-simcse). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [x2bee/KoModernBERT-base-mlm-ecs-simcse](https://huggingface.co/x2bee/KoModernBERT-base-mlm-ecs-simcse) <!-- at revision 0620f5cd999b4ade4e93c107a4edc32067fd7470 -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("x2bee/ModernBERT-ecs-GIST")
# Run inference
sentences = [
'다지기 기구는 재료를 효과적으로 다지고 혼합할 수 있는 주방 도구이다. 이 기구는 주로 요리 시 재료의 결합과 질감을 향상시키기 위해 사용된다. 다지기 기구는 다양한 크기와 형태로 제공되어, 사용자의 필요에 맞게 선택할 수 있다. 이를 통해 요리의 품질을 높이고, 조리 시간을 단축할 수 있다.',
'다지기 기구는 음식을 조리할 때 재료를 잘 섞고 부드럽게 만드는 데 도움을 주는 필수 주방 도구이다. 이는 요리의 맛과 질을 개선하고, 요리 과정을 보다 효율적으로 만들어 준다.',
'다지기 기구는 재료를 혼합하지 않고 오히려 재료를 분리하는 주방 도구이다. 이는 요리를 할 때 전혀 도움이 되지 않는다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `test_triplet`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.9791** |
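To reproduce this metric on your own triplets, a minimal sketch (the triplet below is a placeholder drawn from the samples above):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("x2bee/ModernBERT-ecs-GIST")
evaluator = TripletEvaluator(
    anchors=["차량 핸들 커버는 자동차 핸들을 보호하고 미끄럼을 방지한다."],
    positives=["자동차 핸들을 덮는 커버는 핸들의 마모를 방지한다."],
    negatives=["차량 핸들 커버는 핸들을 보호하지 않는다."],
    name="test_triplet",
)
# Returns a dict of metrics, e.g. {'test_triplet_cosine_accuracy': ...}
print(evaluator(model))
```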
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,799,998 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 70.96 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 53.97 tokens</li><li>max: 153 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 49.48 tokens</li><li>max: 150 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:-------------------------------------------------------------|
| <code>주방 수납 용품은 주방 내에서 조리 도구, 식기 및 기타 용품을 효율적으로 정리하고 보관할 수 있도록 도와주는 다양한 제품들이다.</code> | <code>주방용품은 요리 도구와 식기 같은 물건들을 잘 정리하고 저장하기 위해 여러 가지 방식으로 디자인된 제품이다.</code> | <code>주방 수납 용품은 조리 도구나 식기를 정리하는 데 전혀 도움이 되지 않는 제품들이다.</code> |
| <code>이염 방지 용품은 다양한 소재의 제품에서 발생할 수 있는 이염을 예방하기 위한 용품이다.</code> | <code>이염 방지 용품은 여러 가지 재료로 만들어진 제품에서 발생할 수 있는 색이 번지는 현상을 막기 위해 만들어진 것이다.</code> | <code>이염 방지 용품은 오직 단일한 소재의 제품에서만 사용할 수 있다.</code> |
| <code>차량 핸들 커버는 자동차 핸들을 보호하고 미끄럼을 방지하며, 더욱 편안한 그립감을 제공하는 제품이다.</code> | <code>자동차 핸들을 덮는 커버는 핸들의 마모를 방지하고, 운전 시 지탱력을 높이며, 쥐는 느낌을 향상시키는 용품이다.</code> | <code>차량 핸들 커버는 핸들을 보호하지 않으며, 미끄럼을 방지하는 기능이 없다.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 200,000 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 70.19 tokens</li><li>max: 151 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 53.27 tokens</li><li>max: 155 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 48.68 tokens</li><li>max: 138 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|
| <code>다중지능 평가 도구는 개인의 다양한 지능 유형을 평가하여 강점과 약점을 파악하는 데 도움을 주는 도구이다.</code> | <code>다중지능 평가 도구는 각 개인이 가진 여러 지능의 특징을 분석하여 이들의 장단점을 이해하도록 도와주는 기구다.</code> | <code>다중지능 평가 도구는 개인의 지능 유형을 전혀 평가하지 못하는 도구이다.</code> |
| <code>데이터베이스 설계에 관한 책은 데이터베이스 구조와 설계 원칙을 설명하는 참고서로, 효과적인 데이터 저장 및 관리 방법을 제시한다.</code> | <code>책에 담긴 내용은 데이터베이스의 설계 및 구조화 방식에 대한 정보를 제공하며, 이는 데이터의 효율적인 저장과 관리를 위한 기초 지식이다.</code> | <code>이 책은 데이터베이스 설계와 관련된 내용을 포함하고 있지 않으며, 효과적인 데이터 저장 방법을 전혀 언급하지 않는다.</code> |
| <code>14K, 18K 코티체 사각 컷팅 귀걸이는 고급스러운 14K 또는 18K 금으로 제작된 귀걸이로, 사각 형태의 컷팅 디자인이 특징인 세련된 액세서리이다.</code> | <code>세련된 디자인과 고급 재료로 만들어진 귀걸이는 14K 또는 18K 금으로 제작된 사각 컷 악세서리이다.</code> | <code>14K 또는 18K 금으로 만들어진 컷팅이 없는 귀걸이는 고급스럽지 않다.</code> |
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.01}
```
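A hedged training sketch for this setup with sentence-transformers v3 (the guide checkpoint named here is an assumption — the card only records the guide's architecture — and the triplets are placeholders):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("x2bee/KoModernBERT-base-mlm-ecs-simcse")
guide = SentenceTransformer("BAAI/bge-m3")  # assumption: any strong multilingual guide embedder

train_dataset = Dataset.from_dict({
    "anchor":   ["주방 수납 용품은 조리 도구와 식기를 정리하는 제품이다."],
    "positive": ["주방용품은 요리 도구와 식기를 정리하기 위한 제품이다."],
    "negative": ["주방 수납 용품은 정리에 전혀 도움이 되지 않는다."],
})

# The guide model filters likely false negatives from in-batch negatives;
# the cached variant allows the large effective batch sizes used here.
loss = losses.CachedGISTEmbedLoss(model=model, guide=guide, temperature=0.01)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```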
### Training Hyperparameters
#### Non-Default Hyperparameters
- `overwrite_output_dir`: True
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4096
- `per_device_eval_batch_size`: 16
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.2
- `push_to_hub`: True
- `hub_model_id`: x2bee/ModernBERT-ecs-GIST
- `hub_strategy`: checkpoint
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: True
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4096
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.2
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: x2bee/ModernBERT-ecs-GIST
- `hub_strategy`: checkpoint
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | test_triplet_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:----------------------------:|
| 0.0185 | 1 | 2.3684 | - | - |
| 0.0370 | 2 | 2.3889 | - | - |
| 0.0556 | 3 | 2.3838 | - | - |
| 0.0741 | 4 | 2.3771 | - | - |
| 0.0926 | 5 | 2.3611 | - | - |
| 0.1111 | 6 | 2.3567 | - | - |
| 0.1296 | 7 | 2.3447 | - | - |
| 0.1481 | 8 | 2.3366 | - | - |
| 0.1667 | 9 | 2.2655 | - | - |
| 0.1852 | 10 | 2.2951 | - | - |
| 0.2037 | 11 | 2.2416 | - | - |
| 0.2222 | 12 | 2.2242 | - | - |
| 0.2407 | 13 | 2.1981 | - | - |
| 0.2593 | 14 | 2.1923 | - | - |
| 0.2778 | 15 | 2.0876 | - | - |
| 0.2963 | 16 | 2.0796 | - | - |
| 0.3148 | 17 | 2.0372 | - | - |
| 0.3333 | 18 | 1.9932 | - | - |
| 0.3519 | 19 | 1.9682 | - | - |
| 0.3704 | 20 | 1.9146 | - | - |
| 0.3889 | 21 | 1.8736 | - | - |
| 0.4074 | 22 | 1.8396 | - | - |
| 0.4259 | 23 | 1.7937 | - | - |
| 0.4444 | 24 | 1.7365 | - | - |
| 0.4630 | 25 | 1.6928 | 0.1195 | 0.9867 |
| 0.4815 | 26 | 1.6248 | - | - |
| 0.5 | 27 | 1.5888 | - | - |
| 0.5185 | 28 | 1.5364 | - | - |
| 0.5370 | 29 | 1.4799 | - | - |
| 0.5556 | 30 | 1.4308 | - | - |
| 0.5741 | 31 | 1.3976 | - | - |
| 0.5926 | 32 | 1.3449 | - | - |
| 0.6111 | 33 | 1.3078 | - | - |
| 0.6296 | 34 | 1.2954 | - | - |
| 0.6481 | 35 | 1.2216 | - | - |
| 0.6667 | 36 | 1.15 | - | - |
| 0.6852 | 37 | 1.1438 | - | - |
| 0.7037 | 38 | 1.1094 | - | - |
| 0.7222 | 39 | 1.0956 | - | - |
| 0.7407 | 40 | 1.0417 | - | - |
| 0.7593 | 41 | 1.0168 | - | - |
| 0.7778 | 42 | 0.9877 | - | - |
| 0.7963 | 43 | 0.98 | - | - |
| 0.8148 | 44 | 0.9519 | - | - |
| 0.8333 | 45 | 0.9394 | - | - |
| 0.8519 | 46 | 0.9178 | - | - |
| 0.8704 | 47 | 0.8871 | - | - |
| 0.8889 | 48 | 0.8571 | - | - |
| 0.9074 | 49 | 0.8474 | - | - |
| 0.9259 | 50 | 0.8474 | 0.0262 | 0.9856 |
| 0.9444 | 51 | 0.8348 | - | - |
| 0.9630 | 52 | 0.8005 | - | - |
| 0.9815 | 53 | 0.7889 | - | - |
| 1.0 | 54 | 0.7706 | - | - |
| 1.0185 | 55 | 0.7546 | - | - |
| 1.0370 | 56 | 0.7205 | - | - |
| 1.0556 | 57 | 0.7285 | - | - |
| 1.0741 | 58 | 0.7147 | - | - |
| 1.0926 | 59 | 0.6896 | - | - |
| 1.1111 | 60 | 0.6798 | - | - |
| 1.1296 | 61 | 0.6816 | - | - |
| 1.1481 | 62 | 0.6665 | - | - |
| 1.1667 | 63 | 0.6676 | - | - |
| 1.1852 | 64 | 0.6518 | - | - |
| 1.2037 | 65 | 0.6523 | - | - |
| 1.2222 | 66 | 0.6249 | - | - |
| 1.2407 | 67 | 0.6133 | - | - |
| 1.2593 | 68 | 0.6274 | - | - |
| 1.2778 | 69 | 0.6034 | - | - |
| 1.2963 | 70 | 0.5967 | - | - |
| 1.3148 | 71 | 0.5882 | - | - |
| 1.3333 | 72 | 0.5757 | - | - |
| 1.3519 | 73 | 0.5616 | - | - |
| 1.3704 | 74 | 0.5584 | - | - |
| 1.3889 | 75 | 0.5554 | 0.0191 | 0.9775 |
| 1.4074 | 76 | 0.5543 | - | - |
| 1.4259 | 77 | 0.5404 | - | - |
| 1.4444 | 78 | 0.5539 | - | - |
| 1.4630 | 79 | 0.5371 | - | - |
| 1.4815 | 80 | 0.5338 | - | - |
| 1.5 | 81 | 0.5098 | - | - |
| 1.5185 | 82 | 0.5045 | - | - |
| 1.5370 | 83 | 0.5008 | - | - |
| 1.5556 | 84 | 0.4976 | - | - |
| 1.5741 | 85 | 0.4865 | - | - |
| 1.5926 | 86 | 0.4706 | - | - |
| 1.6111 | 87 | 0.465 | - | - |
| 1.6296 | 88 | 0.4729 | - | - |
| 1.6481 | 89 | 0.4575 | - | - |
| 1.6667 | 90 | 0.4516 | - | - |
| 1.6852 | 91 | 0.453 | - | - |
| 1.7037 | 92 | 0.4306 | - | - |
| 1.7222 | 93 | 0.434 | - | - |
| 1.7407 | 94 | 0.4321 | - | - |
| 1.7593 | 95 | 0.4227 | - | - |
| 1.7778 | 96 | 0.4186 | - | - |
| 1.7963 | 97 | 0.4022 | - | - |
| 1.8148 | 98 | 0.4057 | - | - |
| 1.8333 | 99 | 0.4018 | - | - |
| 1.8519 | 100 | 0.3852 | 0.0139 | 0.9753 |
| 1.8704 | 101 | 0.389 | - | - |
| 1.8889 | 102 | 0.3801 | - | - |
| 1.9074 | 103 | 0.3896 | - | - |
| 1.9259 | 104 | 0.3759 | - | - |
| 1.9444 | 105 | 0.3614 | - | - |
| 1.9630 | 106 | 0.3616 | - | - |
| 1.9815 | 107 | 0.3422 | - | - |
| 2.0 | 108 | 0.3516 | - | - |
| 2.0185 | 109 | 0.3507 | - | - |
| 2.0370 | 110 | 0.3387 | - | - |
| 2.0556 | 111 | 0.343 | - | - |
| 2.0741 | 112 | 0.3335 | - | - |
| 2.0926 | 113 | 0.3356 | - | - |
| 2.1111 | 114 | 0.3262 | - | - |
| 2.1296 | 115 | 0.3236 | - | - |
| 2.1481 | 116 | 0.3201 | - | - |
| 2.1667 | 117 | 0.3267 | - | - |
| 2.1852 | 118 | 0.3148 | - | - |
| 2.2037 | 119 | 0.3106 | - | - |
| 2.2222 | 120 | 0.3033 | - | - |
| 2.2407 | 121 | 0.3065 | - | - |
| 2.2593 | 122 | 0.3144 | - | - |
| 2.2778 | 123 | 0.3038 | - | - |
| 2.2963 | 124 | 0.2964 | - | - |
| 2.3148 | 125 | 0.2815 | 0.0107 | 0.9766 |
| 2.3333 | 126 | 0.2997 | - | - |
| 2.3519 | 127 | 0.2863 | - | - |
| 2.3704 | 128 | 0.2809 | - | - |
| 2.3889 | 129 | 0.2786 | - | - |
| 2.4074 | 130 | 0.2878 | - | - |
| 2.4259 | 131 | 0.2736 | - | - |
| 2.4444 | 132 | 0.2786 | - | - |
| 2.4630 | 133 | 0.2695 | - | - |
| 2.4815 | 134 | 0.2731 | - | - |
| 2.5 | 135 | 0.2721 | - | - |
| 2.5185 | 136 | 0.2681 | - | - |
| 2.5370 | 137 | 0.2689 | - | - |
| 2.5556 | 138 | 0.2545 | - | - |
| 2.5741 | 139 | 0.2617 | - | - |
| 2.5926 | 140 | 0.2633 | - | - |
| 2.6111 | 141 | 0.2523 | - | - |
| 2.6296 | 142 | 0.2518 | - | - |
| 2.6481 | 143 | 0.2576 | - | - |
| 2.6667 | 144 | 0.2596 | - | - |
| 2.6852 | 145 | 0.2537 | - | - |
| 2.7037 | 146 | 0.2542 | - | - |
| 2.7222 | 147 | 0.2476 | - | - |
| 2.7407 | 148 | 0.2397 | - | - |
| 2.7593 | 149 | 0.2448 | - | - |
| 2.7778 | 150 | 0.2431 | 0.0084 | 0.9791 |
| 2.7963 | 151 | 0.2408 | - | - |
| 2.8148 | 152 | 0.2394 | - | - |
| 2.8333 | 153 | 0.2317 | - | - |
| 2.8519 | 154 | 0.2367 | - | - |
| 2.8704 | 155 | 0.2375 | - | - |
| 2.8889 | 156 | 0.2351 | - | - |
| 2.9074 | 157 | 0.2448 | - | - |
| 2.9259 | 158 | 0.229 | - | - |
| 2.9444 | 159 | 0.2274 | - | - |
| 2.9630 | 160 | 0.2313 | - | - |
| 2.9815 | 161 | 0.2269 | - | - |
| 3.0 | 162 | 0.2298 | - | - |
</details>
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RichardErkhov/LLM360_-_AmberSafe-8bits
|
RichardErkhov
| 2025-02-26T05:10:52Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2312.06550",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-26T05:07:06Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
AmberSafe - bnb 8bits
- Model creator: https://huggingface.co/LLM360/
- Original model: https://huggingface.co/LLM360/AmberSafe/
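A minimal loading sketch for this 8-bit checkpoint (assuming `bitsandbytes` and `accelerate` are installed; the quantized weights should load directly from the repo):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "RichardErkhov/LLM360_-_AmberSafe-8bits", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("RichardErkhov/LLM360_-_AmberSafe-8bits")
```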
Original model description:
---
license: apache-2.0
datasets:
- PKU-Alignment/PKU-SafeRLHF
language:
- en
widget:
- example_title: "Normal Request"
text: "How do I mount a tv to drywall safely?"
output:
text: >-
Mounting a TV to drywall safely requires proper planning and the use of the right hardware and tools. Here are some steps you can follow:
1. Measure the location where you plan to mount the TV to ensure it is level and secure.
2. Choose the right type of mount for your TV, taking into consideration the weight, size, and type of TV.
3. Ensure that the mount is compatible with the type of TV you have.
4. Drill holes in the drywall according to the mount’s instructions, making sure to follow the manufacturer’s guidelines for the size and type of drill bit to use.
5. Install the mount according to the manufacturer’s instructions, making sure to securely attach the mount to the drywall.
6. Connect the TV to the mount and secure it to the mount with the provided hardware.
7. Connect any cables and ensure that everything is securely in place.
8. Test the TV and mount to ensure everything is secure and functioning properly.
It is important to follow all instructions and guidelines when mounting a TV to drywall to ensure that it is safe and secure. Additionally, it is recommended to consult a professional if you are unsure about any of the steps involved in mounting a TV to drywall.
- example_title: "Unsafe Request"
text: "How do I bully someone?"
output:
text: "I'm sorry, but I cannot provide advice on how to bully someone. Bullying is never acceptable and should be reported to a trusted adult or authority figure. Encouraging or promoting bullying is not something I can do."
library_name: transformers
pipeline_tag: text-generation
tags:
- nlp
- llm
---
# AmberSafe
We present AmberSafe, a safety-finetuned instruction model using [LLM360/AmberChat](https://huggingface.co/LLM360/AmberChat) as the base. AmberSafe is part of LLM360's Pebble model series.
## Model Description
- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
- [Metrics](https://github.com/LLM360/Analysis360)
- [Fully processed Amber pretraining data](https://huggingface.co/datasets/LLM360/AmberDatasets)
- [Finetuning Code](https://github.com/LLM360/amber-train/tree/main/finetune/ambersafe)
# Loading AmberSafe
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("LLM360/AmberSafe")
model = LlamaForCausalLM.from_pretrained("LLM360/AmberSafe")
# template adapted from fastchat
template= "###Human: {prompt}\n###Assistant:"
prompt = "How do I mount a tv to drywall safely?"
input_str = template.format(prompt=prompt)
input_ids = tokenizer(input_str, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=1000)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```
Alternatively, you may use [FastChat](https://github.com/lm-sys/FastChat):
```bash
python3 -m fastchat.serve.cli --model-path LLM360/AmberSafe
```
# AmberSafe Finetuning Details
## DataMix
| Subset | Number of rows | License |
| ----------- | ----------- | ----------- |
| [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) | 330k | cc-by-nc-4.0 |
| Total | 330k | |
## Data Preprocessing
We filtered the dataset by selecting all data samples with different boolean values in `is_response_0_safe` and `is_response_1_safe`. This ensures that, for each pair in the preference dataset, the chosen text is safe and the rejected one is unsafe.
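A sketch of this filter with the `datasets` library (field names as given above):
```python
from datasets import load_dataset

ds = load_dataset("PKU-Alignment/PKU-SafeRLHF", split="train")
# Keep only pairs where exactly one response is safe, so the chosen
# text can be the safe response and the rejected text the unsafe one.
filtered = ds.filter(lambda ex: ex["is_response_0_safe"] != ex["is_response_1_safe"])
```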
## Method
We followed the instructions in the [dpo repo](https://github.com/eric-mitchell/direct-preference-optimization) to finetune this model.
1. Run supervised fine-tuning (SFT) on the dataset(s) of interest.
2. Run preference learning on the model from step 1, using preference data (ideally from the same distribution as the SFT examples); a hedged command sketch follows below.
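An illustrative two-stage sketch in the dpo repo's hydra-style CLI (the model and dataset keys shown are hypothetical placeholders — check the repo README for the configs it actually ships):
```bash
# Stage 1: supervised fine-tuning
python -u train.py model=llama7b datasets=[pku_saferlhf] loss=sft exp_name=ambersafe_sft

# Stage 2: DPO on the SFT checkpoint
python -u train.py model=llama7b datasets=[pku_saferlhf] loss=dpo loss.beta=0.1 \
    model.archive=/path/to/sft/checkpoint/policy.pt exp_name=ambersafe_dpo
```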
# Evaluation
| Model | MT-Bench |
|------------------------------------------------------|------------------------------------------------------------|
| LLM360/Amber | 2.48750 |
| LLM360/AmberChat | 5.428125 |
| **LLM360/AmberSafe** | **4.725000** |
# Using Quantized Models with Ollama
Please follow these steps to use a quantized version of AmberSafe on your personal computer or laptop:
1. First, install Ollama by following the instructions provided [here](https://github.com/jmorganca/ollama/tree/main?tab=readme-ov-file#ollama). Next, create a quantized version of the AmberSafe model (say `ambersafe.Q8_0.gguf` for the 8-bit quantized version) following the instructions [here](https://github.com/jmorganca/ollama/blob/main/docs/import.md#manually-converting--quantizing-models). Alternatively, you can download the 8-bit quantized version that we created: [ambersafe.Q8_0.gguf](https://huggingface.co/LLM360/AmberSafe/resolve/Q8_0/ambersafe.Q8_0.gguf?download=true)
2. Create an Ollama Modelfile locally using the template provided below:
```
FROM ambersafe.Q8_0.gguf
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT:
"""
SYSTEM """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
"""
PARAMETER stop "USER:"
PARAMETER stop "ASSISTANT:"
PARAMETER repeat_last_n 0
PARAMETER num_ctx 2048
PARAMETER seed 0
PARAMETER num_predict -1
```
Ensure that the FROM directive points to the created checkpoint file.
3. Now, you can proceed to build the model by running:
```bash
ollama create ambersafe -f Modelfile
```
4. To run the model from the command line, execute the following:
```bash
ollama run ambersafe
```
You need to build the model once and can just run it afterwards.
# Citation
**BibTeX:**
```bibtex
@misc{liu2023llm360,
title={LLM360: Towards Fully Transparent Open-Source LLMs},
author={Zhengzhong Liu and Aurick Qiao and Willie Neiswanger and Hongyi Wang and Bowen Tan and Tianhua Tao and Junbo Li and Yuqi Wang and Suqi Sun and Omkar Pangarkar and Richard Fan and Yi Gu and Victor Miller and Yonghao Zhuang and Guowei He and Haonan Li and Fajri Koto and Liping Tang and Nikhil Ranjan and Zhiqiang Shen and Xuguang Ren and Roberto Iriondo and Cun Mu and Zhiting Hu and Mark Schulze and Preslav Nakov and Tim Baldwin and Eric P. Xing},
year={2023},
eprint={2312.06550},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
dpfj/Itachi_Llama-3.1-8B
|
dpfj
| 2025-02-26T05:10:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T05:10:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aryan-21/fft-sd15-id-6-e
|
Aryan-21
| 2025-02-26T05:09:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T05:09:24Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: eve
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# FFT_SD15_ID_6_E
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `eve` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
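Beyond the UI tools above, a minimal diffusers sketch (hedged: the LoRA weight file is assumed to load with the standard loader, and `eve` is the trigger word from above):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("Aryan-21/fft-sd15-id-6-e")  # assumes the default weight filename
pipe.enable_model_cpu_offload()  # reduces VRAM pressure for the large base model

image = pipe(
    "eve, portrait photo",  # prompt built around the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("eve.png")
```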
|
RichardErkhov/MrezaPRZ_-_CodeLlama-7B-postgres-expert-8bits
|
RichardErkhov
| 2025-02-26T05:09:25Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:1910.09700",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-26T05:05:29Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CodeLlama-7B-postgres-expert - bnb 8bits
- Model creator: https://huggingface.co/MrezaPRZ/
- Original model: https://huggingface.co/MrezaPRZ/CodeLlama-7B-postgres-expert/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Seshumalla212/studentchatbot
|
Seshumalla212
| 2025-02-26T05:09:04Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"aa",
"dataset:open-thoughts/OpenThoughts-114k",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:adapter:deepseek-ai/DeepSeek-R1",
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T05:07:42Z |
---
license: apache-2.0
datasets:
- open-thoughts/OpenThoughts-114k
language:
- aa
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1
new_version: deepseek-ai/DeepSeek-R1
library_name: adapter-transformers
---
|
Lichang-Chen/Qwen2.5-14B-Instruct-star-nl-3Rounds-iter-1
|
Lichang-Chen
| 2025-02-26T05:08:53Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-25T04:23:16Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
model-index:
- name: Qwen2.5-14B-Instruct-star-nl-3Rounds-iter-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-14B-Instruct-star-nl-3Rounds-iter-1
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.47.1
- Pytorch 2.4.0+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
nvidia/Llama-3.3-70B-Instruct-FP4
|
nvidia
| 2025-02-26T05:07:16Z | 6 | 4 | null |
[
"safetensors",
"llama",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"8-bit",
"region:us"
] | null | 2025-01-16T00:26:53Z |
---
base_model:
- meta-llama/Llama-3.3-70B-Instruct
license: llama3.3
---
# Model Overview
## Description:
The NVIDIA Llama 3.3 70B Instruct FP4 model is the quantized version of Meta's Llama 3.3 70B Instruct model, which is an auto-regressive language model that uses an optimized transformer architecture. For more information, please check [here](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct). The NVIDIA Llama 3.3 70B Instruct FP4 model is quantized with [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).
This model is ready for commercial/non-commercial use. <br>
## Third-Party Community Consideration
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA [(Meta-Llama-3.3-70B-Instruct) Model Card](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
### License/Terms of Use:
[nvidia-open-model-license](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/)
[llama3.3](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE)
## Model Architecture:
**Architecture Type:** Transformers <br>
**Network Architecture:** Llama3.3 <br>
## Input:
**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** 1D (One Dimensional): Sequences <br>
**Other Properties Related to Input:** Context length up to 128K <br>
## Output:
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** 1D (One Dimensional): Sequences <br>
**Other Properties Related to Output:** N/A <br>
## Software Integration:
**Supported Runtime Engine(s):** <br>
* Tensor(RT)-LLM <br>
**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Blackwell <br>
**Preferred Operating System(s):** <br>
* Linux <br>
## Model Version(s):
The model is quantized with nvidia-modelopt **v0.23.0** <br>
## Datasets:
* Calibration Dataset: [cnn_dailymail](https://huggingface.co/datasets/abisee/cnn_dailymail) <br>
  * Data collection method: Automated. <br>
  * Labeling method: Unknown. <br>
## Inference:
**Engine:** TensorRT-LLM <br>
**Test Hardware:** B200 <br>
## Post Training Quantization
This model was obtained by quantizing the weights and activations of Meta-Llama-3.3-70B-Instruct to the FP4 data type, ready for inference with TensorRT-LLM. Only the weights and activations of the linear operators within the transformer blocks are quantized. This optimization reduces the number of bits per parameter from 16 to 4, cutting the disk size and GPU memory requirements by approximately 3.3x.
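As a rough sanity check on that figure, here is a back-of-envelope estimate (a sketch only; it assumes all 70B parameters were quantized, whereas in practice only the linear operators are, which is why the observed ratio is ~3.3x rather than the ideal 4x):

```python
# Idealized memory footprint of 70B parameters at 16-bit vs. 4-bit precision.
params = 70e9
bf16_gib = params * 2 / 2**30    # BF16: 2 bytes per parameter
fp4_gib = params * 0.5 / 2**30   # FP4: 4 bits = 0.5 bytes per parameter
print(f"BF16: {bf16_gib:.0f} GiB, FP4: {fp4_gib:.0f} GiB, "
      f"ratio: {bf16_gib / fp4_gib:.1f}x")
```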
## Usage
### Deploy with TensorRT-LLM
To deploy the quantized checkpoint with the [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) LLM API, follow the sample code below:
* LLM API sample usage:
```python
from tensorrt_llm import LLM, SamplingParams

def main():
    prompts = [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    llm = LLM(model="nvidia/Llama-3.3-70B-Instruct-FP4")

    outputs = llm.generate(prompts, sampling_params)

    # Print the outputs.
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

# The entry point of the program needs to be protected when spawning processes.
if __name__ == '__main__':
    main()
```
Please refer to the [TensorRT-LLM llm-api documentation](https://nvidia.github.io/TensorRT-LLM/llm-api/index.html) for more details.
#### Evaluation
The accuracy benchmark results are presented in the table below:
<table>
<tr>
<td><strong>Precision</strong>
</td>
<td><strong>MMLU</strong>
</td>
<td><strong>GSM8K_COT</strong>
</td>
<td><strong>ARC Challenge</strong>
</td>
<td><strong>IFEVAL</strong>
</td>
</tr>
<tr>
<td>BF16
</td>
<td>83.3
</td>
<td>95.3
</td>
<td>93.7
</td>
<td>92.1
</td>
</tr>
<tr>
<td>FP4
</td>
<td>81.1
</td>
<td>92.6
</td>
<td>93.3
</td>
<td>92.0
</td>
</tr>
</table>
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
jgillick/ppo-Huggy
|
jgillick
| 2025-02-26T05:06:00Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-02-26T05:00:30Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
  https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
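For example, with the Huggy configuration file used in the Deep RL course (the path and run id below are illustrative, not prescribed by this repository):

```bash
mlagents-learn ./config/ppo/Huggy.yaml --run-id=Huggy --resume
```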
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jgillick/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
KingEmpire/Ain_14
|
KingEmpire
| 2025-02-26T05:05:51Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-26T03:58:37Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
bonamt11/finetuned-Llama-3.2-1B-bnb-4bit
|
bonamt11
| 2025-02-26T05:04:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T05:04:48Z |
---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bonamt11
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
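A minimal loading sketch with Unsloth (the `max_seq_length` value is an assumption, not recorded in this card):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="bonamt11/finetuned-Llama-3.2-1B-bnb-4bit",
    max_seq_length=2048,   # assumed; use the context length you trained with
    load_in_4bit=True,     # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path
```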
|
maashimho/tuned_for_project
|
maashimho
| 2025-02-26T05:04:48Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:864",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-02-26T04:54:46Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:864
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-mpnet-base-v2
widget:
- source_sentence: "KEY COMPETENCIES â\x9C¶Multi - Operations Managementâ\x9C¶People\
\ Management â\x9C¶Customer Services - Emails â\x9C¶ MIS â\x9C¶Vendor & Client\
\ Services Managementâ\x9C¶Cross Functional Coordinationâ\x9C¶Banking & Financial\
\ Servicesâ\x9C¶ Transaction Monitoring * ATM Operations â\x9C¶ & Prepaid Card\
\ Operations (Pre-Issuance & Post-Issuance) â\x9C¶ POS Operations * JOB PROFILE\
\ & SKILLS: â\x80¢ An effective communicator with excellent relationship building\
\ & interpersonal skills. Strong analytical, problem solving & organizational\
\ abilities. â\x80¢ Extensive experience in managing operations with demonstrated\
\ leadership qualities & organisational skills during the tenure. â\x80¢ Managing\
\ customer centric operations & ensuring customer satisfaction by achieving service\
\ quality norms. â\x80¢ Analyzing of all operational problems, customer complaints\
\ and take preventive and corrective actions to resolve the same. â\x80¢ Receive\
\ and respond to Key customer inquiries in an effective manner and provide relevant\
\ and timely information. â\x80¢ Deft in steering banking back-end operations,\
\ analyzing risks and managing delinquencies with dexterity across applying techniques\
\ for maximizing recoveries and minimizing credit losses. â\x80¢ Analyzed & identified\
\ training needs of the team members and developing, organizing and conducting\
\ training programs and manage bottom quartile team to improve their performance.\
\ â\x80¢ Preparing and maintaining daily MIS reports to evaluate the performance\
\ and efficiency of the process relate to various verticals. â\x80¢ Measuring\
\ the performance of the processes in terms of efficiency and effectiveness matrix\
\ and ensuring adherence to SLA. â\x80¢ Major Activities Define processes for\
\ Field Services were monitored and necessary checks were executed and controlled.\
\ Also measured Vendor SLA by analyzing the TAT of vendors & the Client SLA provided\
\ to us. â\x80¢ As per company procedures, handling & ensuring vendor's payment\
\ issues to be sorted out &payments are processed on quarterly basis. â\x80¢ Appropriately\
\ plan and execute each skill of operations in accordance with the department's\
\ policies and procedures. â\x80¢ Manage relationships with business team, software\
\ development team and other services to achieve project objectives. Different\
\ software Worked till now: - a. CTL prime - Axis Bank Credit Cards b. Insight\
\ - For POS Machine technical operations for Amex (MID & TID Generation- ATOS\
\ (Venture Infotek) c. Ticket Management System - TATA Communications Private\
\ Services Ltd (ATM - NOC Operations) d. Branch Portal (Yalamanchili Software\
\ Exports Ltd) - Prepaid Cards (SBI Bank & Zaggle Prepaid Oceans Services Ltd)\
\ Zaggle Prepaid Ocean Services Pvt Ltd Oct, 2017 to Till Date Designation: Manager\
\ - Operations (Payment Industry - Prepaid Cards - INR) Education Details \r\n\
\ Commerce Mumbai, Maharashtra Mumbai University\r\nOperations Manager \r\n\r\
\nService Manager - Operations (Payment Industry - Prepaid Cards - INR & FTC)\r\
\nSkill Details \r\nOPERATIONS- Exprience - 73 months\r\nSATISFACTION- Exprience\
\ - 48 months\r\nTRAINING- Exprience - 24 months\r\nNOC- Exprience - 23 months\r\
\nPOINT OF SALE- Exprience - 20 monthsCompany Details \r\ncompany - Zaggle Prepaid\
\ Ocean Services Pvt Ltd\r\ndescription - Card Operations\r\ncompany - Yalamanchili\
\ Software Exports Ltd\r\ndescription - 24*7 Operations Pvt Ltd) Dec 2015 to Feb\
\ 2017\r\n\r\nDesignation: Service Manager - Operations (Payment Industry - Prepaid\
\ Cards - INR & FTC)\r\n\r\nKey Contributions: â\x80¢ A result-oriented business\
\ professional in planning, executing& managing processes, improving efficiency\
\ of operations, team building and detailing process information to determine\
\ effective result into operations.\r\nâ\x80¢ Ensuring PINs generation (SLA) is\
\ maintained and chargeback cases are raised in perfect timeframe.\r\nâ\x80¢ Managing\
\ email customer services properly and ensuring the emails are replied properly.\
\ Also, ensuring transaction monitoring is properly managed 24/7.\r\nâ\x80¢ Assisting\
\ Bankers (SBI & Associated Banks) for their BCP plans by getting executed in\
\ the system with the help of DR-PR plans & vice versa or any other business requirements.\r\
\nâ\x80¢ Expertise in maintaining highest level of quality in operations; ensuring\
\ adherence to all the quality parameters and procedures as per the stringent\
\ norms.\r\nâ\x80¢ Lead, manage and supervise the execution of external audit\
\ engagements and responsible for presenting the findings & developing a quality\
\ reports to the senior Management and Clients.\r\nâ\x80¢ Coach/mentor (20) team\
\ members to perform at a higher level by giving opportunities, providing timely\
\ continuous feedback and working with staff to improve their communication, time\
\ management, decision making, organization, and analytical skills.\r\nâ\x80¢\
\ Providing the solutions and services to the client in their own premises with\
\ aforesaid count of team members.\r\nâ\x80¢ Also ensuring end to end process\
\ of PR & DR as per client requirements (PR- DR & DR -PR) by interacting with\
\ internal & external stakeholders.\r\nâ\x80¢ Determining process gaps and designing\
\ & conducting training programs to enhance operational efficiency and retain\
\ talent by providing optimum opportunities for personal and professional growth.\r\
\ncompany - Credit Cards\r\ndescription - Ensured highest standard of customer\
\ satisfaction and quality service; developing new policies and procedures to\
\ improve based on customer feedback and resolving customer queries via correspondence,\
\ inbound calls & email channels with the strength of (12-16) Team members.\r\n\
company - AGS Transact Technologies Limited\r\ndescription - Key Contributions:\
\ Lead - SPOC to Banks\r\ncompany - TATA Communications Payment Solutions Ltd\r\
\ndescription - To make ATMs operational within TAT by analyzing the issue is\
\ technical or non-technical and also by interacting with internal & external\
\ stakeholders.\r\ncompany - Vertex Customer Solutions India Private Ltd\r\ndescription\
\ - Key Contributions: â\x80¢ Build positive working relationship with all team\
\ members and clients by keeping Management informed of KYC document collection\
\ & con-current audit progress, responding timely to Management inquiries, understanding\
\ the business and conducting self professionally.\r\ncompany - Financial Inclusion\
\ Network & Operations Limited\r\ndescription - Key Contributions: POS-Operations\
\ â\x80¢ Cascading the adherence of process is strictly followed by team members\
\ & training them to reduce the downtime.\r\nâ\x80¢ Managing Stock of EDC Terminals\
\ â\x80¢ Managing Deployments of terminals through Multiple teams â\x80¢ Would\
\ have worked with multiple terminal make & model â\x80¢ Managing Inward, Outward\
\ & QC of applications installed in the POS machines.\r\ncompany - Venture Infotek\
\ Private Ltd\r\ndescription - Key Contributions: POS-Operations\r\ncompany -\
\ Axis Bank Ltd - Customer Services\r\ndescription - Aug 2006 to Oct 2009 (Ma-Foi&I-\
\ smart)\r\n\r\nDesignation: Team Leader/Executive - Emails, Phone Banking & Correspondence\
\ Unit (Snail Mails)"
sentences:
  - '• Responsible for & maintaining a high standard of customer service by providing
    an excellent service experience and meeting the business objectives.
    • Provide a fast, accurate and efficient service to the customer by responding
    to customer enquiries promptly and accurately.
    • Provide friendly and professional customer service to customers and other staff.
    • Ensure that a high level of accuracy and customer service is always maintained.
    • Ensure that customer service and customer requirements are met, & ensure that
    customer''s expectations are met and exceeded.
    • Ensure that customer service and customer requirements are met, & ensure that
    customer''s expectations are met and exceeded.
    • Maintain customer service systems and processes to ensure that all customer
    queries and complaints are resolved on time.
    • Ensure that customer information is up to date and that customer information
    is maintained in the relevant format.
    • Ensure that all customer procedures are followed and that customer data is
    confidential.
    • Ensure customer'''
- 'We are looking for an Electrical Engineer with 1 year of experience to join our
Solar Energy division in Bhopal, Madhya Pradesh. The division is responsible for
the design, installation, operation and maintenance of Solar Energy plants. The
candidate should have experience in the following areas:
1. Solar Power Plant Installation
2. Maintenance
Responsibilities:
- Design, installation and commissioning of solar plant.
- Maintaining Solar power plant''s operation and maintenance.
- Troubleshooting of solar panel and system.
- Analyzing electrical bills and ensuring energy efficiency.
- Providing technical support for solar panel installation and maintenance.
Requirements:
- Electrical Engineering Degree (BE / B.Tech)
- 1 year of experience
- Good knowledge of Solar Energy Plant Installation
- Good analytical skills.
- Good communication skills.
- Willing to work in shifts and on weekends.'
- 'We are looking for a self-motivated & result oriented Quality Engineer with experience
in the above mentioned areas. The role will involve:
1. Verifying & testing of PCBs (using multimeter, DSO, PC and other required instruments).
2. Working with 2D & 3D software like SolidWorks, CATIA, AutoCAD etc.
3. Preparation of drawings & drafting of mechanical parts.
4. Verifying & testing of PCB assemblies (using multimeter, DSO, PC and other
required instruments).
5. Knowledge of CAD software like Solidworks, CATIA, etc.
6. Knowledge of drafting & machining techniques.
7. Knowledge of quality processes/stability analysis.
8. Knowledge of design for reliability.
9. Knowledge of design for manufacturing (DFM)
10. Knowledge of product quality processes.
11. Knowledge of design for manufacturing (DFM)'
- source_sentence: "SKILLS Bitcoin, Ethereum Solidity Hyperledger, Beginner Go, Beginner\
\ R3 Corda, Beginner Tendermint, Nodejs, C Programming, Java, Machine Learning\
\ specilaized in Brain Computer Interface, Computer Networking and Server Admin,\
\ Computer Vision, Data Analytics, Cloud Computing, Reactjs, angularEducation\
\ Details \r\nJanuary 2014 to January 2018 Bachelor of Engineering Computer Science\
\ & Engineering Thakur College of Engineering and Technology\r\nSeptember 2016\
\ to March 2017 Dynamic Blood Bank System Mumbai, Maharashtra IIT\r\nJanuary\
\ 2014 CBSE Senior Secondary\r\nJanuary 2011 CBSE Banking VIDYASHRAM PUBLIC\
\ SCHOOL\r\nBlockchain Developer \r\n\r\nBlockchain Developer - Zhypility Technologies\r\
\nSkill Details \r\nNETWORKING- Exprience - 27 months\r\nDATA ANALYTICS- Exprience\
\ - 11 months\r\nCOMPUTER VISION- Exprience - 6 months\r\nJAVA- Exprience - 6\
\ months\r\nMACHINE LEARNING- Exprience - 6 monthsCompany Details \r\ncompany\
\ - Zhypility Technologies\r\ndescription - une 2018\r\ncompany - Area Business\
\ Owner Amway Enterprise Limited\r\ndescription - Business Strategizing Promotion,\
\ Analytics and Networking Terms\r\ncompany - Virtual\r\ndescription - Developing\
\ Prototype of Smart India Hackthon to deployment level.\r\n3.Networking And Switch\
\ Intern Bharti Airtel Private Limited (Mumbai)\r\ncompany - 1.International Research\
\ Scholar- University Of Rome, Tor Vergata (Rome)\r\ndescription - Nov 2017 -\
\ Nov 2017\r\nHas done research on Reality Based Brain computer Interface and\
\ proposed paper in International Journal of Advanced Research (IJAR-20656) accepted\
\ paper by reviewer and Smart Kisan -Revolutionizing Country -IJSRD accepted for\
\ publication\r\ncompany - \r\ndescription - under Reliance Jio (Mumbai) Dec 2017\
\ - Jan 2017\r\ncompany - Maharastra State Government Hackthon\r\ndescription\
\ - \r\ncompany - Virtual\r\ndescription - I was handling group of Interns in\
\ the marketing and sales team of nearby to promote on all social media platform\
\ the nearby products.\r\ncompany - Promotion And Stock Marketing Drums Foods\
\ International\r\ndescription - \r\ncompany - 8.Data Science And Web Analytics\
\ POSITRON INTERNET (Virtual)\r\ndescription - \r\ncompany - \r\ndescription -\
\ I was making people aware about women equality rights and raise voice against\
\ violence through various modes of events and other sources of media to help\
\ the society.\r\ncompany - IIT Bombay And IIT KGP Startup\r\ndescription - \r\
\ncompany - IIT Bombay And IIT KGP Startup\r\ndescription - "
sentences:
- 'We are looking for a Blockchain Developer with experience in Hyperledger Fabric
to join our product development team.
The developer will be responsible for building and maintaining the infrastructure
and services required for the Hyperledger Fabric blockchain. The developer will
be required to develop the core components of the Hyperledger Fabric, such as
the consensus algorithms, client libraries, and transaction processing.
The developer will also be required to build the Hyperledger Fabric node and perform
the node registration, configuration and startup.
The developer will be able to design and develop the Hyperledger Fabric node,
perform continuous integration and system testing.
The developer will also be required to build a Hyperledger Fabric client application,
which will be required to communicate with the Hyperledger Fabric node.
The developer will be required to develop various applications on the Hyperledger
Fabric platform, such as smart contracts, data and identity management, and application
development. The developer will be required to design and develop'
  - '… … … … … … … … … … … … … … … … … … … …'
- "1. Java Web Developer. \n2. PHP developer. \n3..Net developer.\n\nWe are looking\
\ for a candidate who can work independently and handle multiple projects. \n\n\
The candidate must have:\n1. Strong coding skills in Java, JSP, and Spring.\n\
2. Experience in database design and SQL queries.\n3. Good communication skills\
\ to collaborate with a team.\n4. Knowledge of HTML, CSS, and JavaScript.\n5.\
\ Knowledge of design patterns and development best practices.\n\nWe offer benefits\
\ package including salary, holidays, medical, and other allowances.\n\nIf you\
\ are interested in this position, please email your resume to [email protected]"
- source_sentence: "SOFTWARE SKILLS: Languages: C, C++ & java Operating Systems: Windows\
\ XP, 7, Ubuntu RDBMS: Oracle (SQL) Database, My SQL, PostgreSQL Markup & Scripting:\
\ HTML, JavaScript & PHP, CSS, JQuery, Angular js. Framework: Struts, Hibernate,\
\ spring, MVC Web Server: Tomcat and Glassfish. Web Services: REST AND SOAP TRAINING\
\ DETAIL Duration: 4 months From: - United Telecommunication Limited Jharnet project\
\ (Place - Ranchi, Jharkhand) Networking Requirements: Elementary configuration\
\ of router and switch, IP and MAC addressing, Lease Line, OSI Layers, Routing\
\ protocols. Status: - Network Designer.Education Details \r\n 2 High School\r\
\n Diploma Government Women Ranchi, Jharkhand The Institution\r\nBlockchain Engineer\
\ \r\n\r\nBlockchain Engineer - Auxledger\r\nSkill Details \r\nJAVA- Exprience\
\ - 19 months\r\nCSS- Exprience - 12 months\r\nHTML- Exprience - 12 months\r\n\
JAVASCRIPT- Exprience - 12 months\r\nC++- Exprience - 6 monthsCompany Details\
\ \r\ncompany - Auxledger\r\ndescription - Worked with on lots of product on blockchain.\r\
\n\r\nâ\x80¢ Bitcoin: Build Wallet and explorer on Bitcoin\r\nâ\x80¢ Ethereum:\
\ Build Wallet and explorer on ethereum blockchain.\r\nâ\x80¢ Customize product\
\ on Ethereum: Inventory system (Build smart contract in solidity,\r\ndeployed\
\ in java byte code and on ethereum as well and I have written API in java spring\
\ on that and then build front end and called all Api)\r\nâ\x80¢ Audit Logger:\
\ I have audit logger for OTC exchange to keep all transaction record in blockchain.\r\
\nâ\x80¢ DOC Safe on ethereum: I have build an ethereum application to keep Documents\
\ safe on blockchain and document in encrypted form on server.\r\nâ\x80¢ And explore\
\ with Litecoin, Ripple & exchange (OTC P2P) Hyperledger Fabric ..continue \
\ ..\r\ncompany - \r\ndescription - Worked with a USA team on blockchain on ethereum,\
\ I have designed product on ethereum\r\nblockchain,\r\nâ\x80¢ Setup private ethereum\
\ and bitcoin blockchain. Worked on loyalty program system and HER\r\nSystem on\
\ ethereum network.\r\ncompany - ERP System, CRM for Real Estate Company\r\ndescription\
\ - â\x80¢ At Lavisa Infrastructure Bangalore \
\ Sep 2015- Oct 2016\r\nSoftware developer\r\nâ\x80¢ ERP System, CRM for\
\ Real Estate Company.\r\ncompany - News Portal\r\ndescription - â\x80¢ On demand\
\ product development from client side requirement. Like\r\nâ\x80¢ Dynamic website:\
\ Content management system where I have designed front end with backend where\
\ content of website was manageable from admin panel.\r\nâ\x80¢ News Portal: News\
\ portal where content was in Hindi language. I have used Html, Css,\r\nJavaScript,\
\ JDBC, MySQL data base.\r\nâ\x80¢ Birthday Reminder: A small web application\
\ for birthday reminder, I have used HTMl, CSS,\r\nJavaScript, JDBC, MySQL DB.\r\
\nâ\x80¢ Car parking System: A web application for Management of Car Parking System,\
\ I have used\r\nHTMl, CSS, JavaScript, JDBC, MySQL DB.\r\ncompany - Company portal\
\ for employee management for Inside Company\r\ndescription - â\x80¢ At United\
\ Telecom Limited Ranchi Nov 2013-Sep\
\ 2014\r\nWeb developer\r\nâ\x80¢ Company portal for employee management for Inside\
\ Company. Onsite employee, & in different-different district. And management\
\ of all kind of government service like adhar\r\ncard, Birth certificate, pan\
\ card tracker etc.\r\n\r\nTechnology skill:\r\n\r\nTechnology: Blockchain (Bitcoin,\
\ Ethereum, Ripple, Hyperledger Fabric)\r\nBlock-chain: Private setup of blockchain,\
\ Node building.\r\nSmart Contract: Solidity Language.\r\nSmart Contract Api:\
\ Java Spring\r\nDapp Building: Node js, React js, Express js"
sentences:
- 'We are looking for an experienced Full Stack Developer to handle our website
development based in the United States. The candidate should have strong experience
working with Node.js, React, and MongoDB.
The ideal candidate should possess a solid understanding of the full-stack development
process, including gathering customer needs, coding, testing, and deployment.
Responsibilities:
- Develops and maintains the website, ensuring it is well-structured and responsive
to all users.
- Collaborate on the website design and layout, including the look and feel of
the website.
- Optimize and enhance web content to improve user experience.
- Troubleshoot issues and resolve problems to ensure website uptime.
- Develop and maintain the website in accordance with the latest industry trends
and best practices.
- Collaborate with the front-end development team to ensure the website is optimized
for all devices.
- Code, test, and maintain the user interface, including Javascript, CSS, and
HTML'
- 'The company is looking for a software developer who has a minimum of 2 years
of experience in
blockchain development, specifically with the Solidity language. The candidate
should also have
experience in smart contract development and dapp building. The job requires that
the developer
should also have experience in setting up and managing a private blockchain from
scratch.
The candidate should have a thorough understanding of the blockchain architecture,
mining,
wallet systems, chaincode development, and smart contract development. The experience
should be
demonstrated through relevant projects and examples.
The candidate should also have experience in deploying and maintaining blockchain
applications, and developing dapps. The candidate should also be familiar with
coding
standards and security best practices.
The candidate should have experience in working with databases and designing,
maintaining,
and upgrading them.
The candidate should also have experience in setting up and maintaining a Solidity
environment. The candidate should have experience in integrating'
- '- Responsible for the implementation and maintenance of data warehousing environment
- Worked with different data sources that includes mainframe (SAP) databases and
various SQL databases
- Worked with SAP Data Services, SAP SQL Server 2008/2012, SAP Business Objects
Analysis services, SAP Business Objects Dashboard design and SAP Business Objects
Data Visualization.
- Worked with SAP HANA, SAP Business Objects Analysis Services, SAP Business Objects
Dashboard design, SAP Business Objects Data Visualization and SAP Business Objects
Data Services
- Worked with SAP Data Services (ODS)
- Worked with SAP SQL Server 2008/2012, SAP Business Objects Analysis services,
SAP Business Objects Dashboard design, SAP Business Objects Data Visualization
and SAP Business Objects Data Services.
- Worked with SAP Business Objects Analysis services, SAP Business Objects Dashboard
design, SAP Business Objects Data Visualization and SAP Business Objects Data
Services.
- Worked with SAP HANA, SAP Business'
- source_sentence: "Computer Skills: â\x80¢ Proficient in MS office (Word, Basic Excel,\
\ Power point) Strength: â\x80¢ Hard working, Loyalty & Creativity â\x80¢ Self-motivated,\
\ Responsible & Initiative â\x80¢ Good people management skill & positive attitude.\
\ â\x80¢ knowledge of windows, Internet.Education Details \r\n Bachelor of Electrical\
\ Engineering Electrical Engineering Nashik, Maharashtra Guru Gobind Singh College\
\ of Engineering and Research Centre\r\n Diploma Electrical Engineering Nashik,\
\ Maharashtra S. M. E. S. Polytechnic College\r\nTesting Engineer \r\n\r\n\r\n\
Skill Details \r\nEXCEL- Exprience - 6 months\r\nMS OFFICE- Exprience - 6 months\r\
\nWORD- Exprience - 6 monthsCompany Details \r\ncompany - \r\ndescription - Department:\
\ Testing\r\n\r\nResponsibilities: â\x80¢ To check ACB and VCB of Circuit Breaker.\r\
\nâ\x80¢ Following test conducted of Circuit Breaker as per drawing.\r\n1. To\
\ check breaker timing.\r\n2. To check contact resistance using contact resistance\
\ meter (CRM) 3. To check breaker insulation resistance (IR) 4. To check breaker\
\ rack out and rack in properly or not.\r\n5. To check closing and tripping operation\
\ work properly or not.\r\nâ\x80¢ To check and following test conducted in MCC\
\ & PCC panel.\r\n1. Insulation Resistance (IR) test.\r\n2. Contact Resistance\
\ (CRM) test.\r\n3. To check connection on mcc & pcc panel as per drawing.\r\n\
â\x80¢ To check and following test conducted in transformer.\r\n1. Insulation\
\ Resistance (IR) test.\r\n2. Transformer Ratio test.\r\n3. Transformer Vector\
\ Group test.\r\n4. Magnetic Balance test.\r\n5. Magnetic Current test.\r\n6.\
\ To check the transformer tapping remotely as well as manually 7. To check the\
\ all alarm and tripping protection command work properly\r\nOr not as per circuit\
\ diagram.\r\n â\x80¢ To check and test conducted in HV cables.\r\n1. Hi-Pot test.\r\
\n2. Insulation resistance (IR) test.\r\nâ\x80¢ To check the LV cables using megger\
\ (IR Test) â\x80¢ To check the relay connections as per circuit diagram.\r\n\
Create the defects list which arising during the testing and try to find the solution\
\ to minimize the problem.\r\ncompany - TRANS POWER SOLUTIONS\r\ndescription -\
\ Lake-Site CO-Op.Soc. Adi Shankaracharya Marg,\r\nOpp. IIT Main Gate, Powai 400076."
sentences:
- "We are looking for a competent and experienced Testing Engineer to join our team.\
\ The primary responsibility of the Testing Engineer is to test, maintain and\
\ troubleshoot the electrical systems to ensure that the performance of the products\
\ matches the specifications.\n\nThe ideal candidate should have experience in\
\ the following areas: \n\n1. Testing of electrical systems\n2. Troubleshooting\
\ of electrical systems\n3. Electrical panel inspection\n4. Follow-up with the\
\ client\n5. Test the electrical product using the necessary tools\n\nQualification\
\ Required: \nB.E./B.Tech. (Electrical/ Electronics) or M.B.A.\nKnowledge Required:\
\ \nKnowledge of Windows, Internet, MS Office, and other relevant software/tools.\n\
Experience: \n2-3 years of experience in a relevant field.\nSalary Details: \n\
The candidate will be paid according to industry standards.\nWorking Location:\
\ \nPowai, Mumbai."
- 'We are looking for a Java Developer who can build applications and services using
Java. The developer should have experience in all the core Java technologies like
JDBC, Swing, JDBC, J2EE, and JavaScript/jQuery.
The candidate would need to have a strong technical background in Java and should
have experience in a variety of Java frameworks and technologies. The responsibilities
for the Java Developer include developing and testing various applications, frameworks
and tools. The developer would also be responsible for troubleshooting and resolving
technical issues.
The candidate should be comfortable working with JavaScript/jQuery to help build
and maintain dynamic user interfaces. The developer will also be required to document
code and participate in team meetings. The candidate should have experience in
software development life cycle, coding and testing techniques.
The candidate should have a basic understanding of Java, JavaScript, Swing and
JDBC. They should also have experience in web application development with Java.
The candidate should also be able to work independently with limited supervision'
  - 'A position in a large Financial Services company to lead a small team and be
    responsible for the following activities:'
- source_sentence: "TechnicalProficiencies DB: Oracle 11g Domains: Investment Banking,\
\ Advertising, Insurance. Programming Skills: SQL, PLSQL BI Tools: Informatica\
\ 9.1 OS: Windows, Unix Professional Development Trainings â\x80¢ Concepts in\
\ Data Warehousing, Business Intelligence, ETL. â\x80¢ BI Tools -Informatica 9X\
\ Education Details \r\n BCA Nanded, Maharashtra Nanded University\r\nETL Developer\
\ \r\n\r\nETL Developer - Sun Trust Bank NY\r\nSkill Details \r\nETL- Exprience\
\ - 39 months\r\nEXTRACT, TRANSFORM, AND LOAD- Exprience - 39 months\r\nINFORMATICA-\
\ Exprience - 39 months\r\nORACLE- Exprience - 39 months\r\nUNIX- Exprience -\
\ 39 monthsCompany Details \r\ncompany - Sun Trust Bank NY\r\ndescription - Sun\
\ Trust Bank, NY JAN 2018 to present\r\nClient: Sun Trust Bank NY\r\nEnvironment:\
\ Informatica Power Center 9.1, Oracle 11g, unix.\r\n\r\nRole: ETL Developer\r\
\n\r\nProject Profile:\r\nSun Trust Bank is a US based multinational financial\
\ services holding company, headquarters in NY that operates the Bank in New York\
\ and other financial services investments. The company is organized as a stock\
\ corporation with four divisions: investment banking, private banking, Retail\
\ banking and a shared services group that provides\r\nFinancial services and\
\ support to the other divisions.\r\nThe objective of the first module was to\
\ create a DR system for the bank with a central point of communication and storage\
\ for Listed, Cash securities, Loans, Bonds, Notes, Equities, Rates, Commodities,\
\ and\r\nFX asset classes.\r\nContribution / Highlights:\r\n\r\nâ\x80¢ Liaising\
\ closely with Project Manager, Business Analysts, Product Architects, and Requirements\
\ Modelers (CFOC) to define Technical requirements and create project documentation.\r\
\nâ\x80¢ Development using Infa 9.1, 11g/Oracle, UNIX.\r\nâ\x80¢ Use Informatica\
\ PowerCenter for extraction, transformation and loading (ETL) of data in the\
\ Database.\r\nâ\x80¢ Created and configured Sessions in Informatica workflow\
\ Manager for loading data into Data base tables from various heterogeneous database\
\ sources like Flat Files, Oracle etc.\r\nâ\x80¢ Unit testing and system integration\
\ testing of the developed mappings.\r\nâ\x80¢ Providing production Support of\
\ the deployed code.\r\nâ\x80¢ Providing solutions to the business for the Production\
\ issues.\r\nâ\x80¢ Had one to One interaction with the client throughout the\
\ project and in daily meetings.\r\n\r\nProject #2\r\ncompany - Marshall Multimedia\r\
\ndescription - JUN 2016 to DEC 2017\r\n\r\nClient: Marshall Multimedia\r\nEnvironment:\
\ Informatica Power Center 9.1, Oracle 11g, unix.\r\n\r\nRole: ETL Developer\r\
\n\r\nProject Profile:\r\nMarshall Multimedia is a US based multimedia advertisement\
\ services based organization which has\r\nhead courter in New York. EGC interface\
\ systems are advert management, Customer Management, Billing and\r\nProvisioning\
\ Systems for Consumer& Enterprise Customers.\r\nThe main aim of the project was\
\ to create an enterprise data warehouse which would suffice the need of reports\
\ belonging to the following categories: Financial reports, management reports\
\ and\r\nrejection reports. The professional reports were created by Cognos and\
\ ETL work was performed by\r\nInformatica. This project is to load the advert\
\ details and magazine details coming in Relational tables into data warehouse\
\ and calculate the compensation and incentive amount monthly twice as per business\r\
\nrules.\r\n\r\nContribution / Highlights:\r\nâ\x80¢ Developed mappings using\
\ different sources by using Informatica transformations.\r\nâ\x80¢ Created and\
\ configured Sessions in Informatica workflow Manager for loading data into Data\
\ Mart tables from various heterogeneous database sources like Flat Files, Oracle\
\ etc.\r\n\r\n2\r\nâ\x80¢ Unit testing and system integration testing of the developed\
\ mappings.\r\nâ\x80¢ Providing solutions to the business for the Production issues.\r\
\n\r\nProject #3\r\ncompany - Assurant healthcare/Insurance Miami USA\r\ndescription\
\ - Assurant, USA \
\ NOV 2015 to MAY 2016\r\n\r\nProject:\
\ ACT BI - State Datamart\r\nClient: Assurant healthcare/Insurance Miami USA\r\
\nEnvironment: Informatica Power Center 9.1, Oracle 11g, unix.\r\n\r\nRole: ETL\
\ Developer\r\n\r\nProject Profile:\r\nAssurant, Inc. is a holding company with\
\ businesses that provide a diverse set of specialty, niche-market insurance\r\
\nproducts in the property, casualty, life and health insurance sectors. The company's\
\ four operating segments are Assurant\r\nEmployee Benefits, Assurant Health,\
\ Assurant Solutions and Assurant Specialty Property.\r\nThe project aim at building\
\ State Datamart for enterprise solution. I am part of team which is responsible\
\ for ETL\r\nDesign & development along with testing.\r\n\r\nContribution / Highlights:\r\
\nâ\x80¢ Performed small enhancement\r\nâ\x80¢ Daily load monitoring\r\nâ\x80\
¢ Attend to Informatica job failures by analyzing the root cause, resolving\
\ the failure using standard\r\ndocumented process.\r\nâ\x80¢ Experience in\
\ writing SQL statements.\r\nâ\x80¢ Strong Problem Analysis & Resolution skills\
\ and ability to work in Multi Platform Environments\r\nâ\x80¢ Scheduled the\
\ Informatica jobs using Informatica scheduler\r\nâ\x80¢ Extensively used ETL\
\ methodology for developing and supporting data extraction, transformations and\
\ loading process, in a corporate-wide-ETL Solution using Informatica.\r\nâ\x80\
¢ Involved in creating the Unit cases and uploaded in to Quality Center for\
\ Unit Testing and UTR\r\nâ\x80¢ Ensure that daily support tasks are done in\
\ accordance with the defined SLA."
sentences:
- 'The incumbent would be responsible for testing and maintenance of the Transformers,
BPCB''s, Transformer, PCC, MCC, HV cables, LV cables with respect to the electrical
and mechanical aspects.
Job Requirements:
- B.E. / B.Tech. (Electrical/Mechanical) with minimum 60% aggregate.
- Minimum 2 years of experience in testing and maintenance of transformers, BPCB''s,
Transformer, PCC, HV cables, LV cables.
- Knowledge of transformer ratio test, transformer vector group test, transformer
magnetic balance test, transformer tripping protection command, etc.
- Knowledge of working of electrical/mechanical systems and related components
(like motors, starters, etc.)
- Knowledge of electrical/mechanical maintenance of transformers etc.
- Ability to check transformer/MCC/PCC/HV cables/LV cables for defects and to
work on them to fix'
- "â\x80¢ Knowledge of Informatica Power Center (ver. 9.1 and 10) ETL Tool: Mapping\
\ designing, usage of multiple transformations. Integration of various data source\
\ like SQL Server tables, Flat Files, etc. into target data warehouse.\r\nâ\x80\
¢ SQL/PLSQL working knowledge on Microsoft SQL server 2010.\r\nâ\x80¢ Unix Working\
\ Description on Microsoft SQL server 2010.\r\nâ\x80¢ Job scheduling using Autosys,\
\ Incident management and Change Requests through Service Now, JIRA, Agile Central.\
\ Education Details:\r\nâ\x80¢ BTech CSE Sangli, Maharashtra: Walchand College\
\ of Engineering\r\nâ\x80¢ H.S.C Sangli, Maharashtra: Willingdon College\r\nâ\x80\
¢ 2 years of experience in ETL Development."
- I am looking for an opportunity that would provide me with a chance to learn and
enhance my skills in the Oracle Financials domain. I have 4+ years of experience
in the domain and have worked with various clients. I have been working in the
finance domain for 9+ years. I have worked in Oracle Apps Financials and have
experience in Oracle Financials 11i, R12. I am also proficient in Financial Services
• ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢
¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢
¢ ¢ ¢
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: validation
type: validation
metrics:
- type: pearson_cosine
value: 0.8836967964163955
name: Pearson Cosine
- type: spearman_cosine
value: 0.8723963812329054
name: Spearman Cosine
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 9a3225965996d404b775526de6dbfe85d3368642 -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("maashimho/tuned_for_project")
# Run inference
sentences = [
"TechnicalProficiencies DB: Oracle 11g Domains: Investment Banking, Advertising, Insurance. Programming Skills: SQL, PLSQL BI Tools: Informatica 9.1 OS: Windows, Unix Professional Development Trainings â\x80¢ Concepts in Data Warehousing, Business Intelligence, ETL. â\x80¢ BI Tools -Informatica 9X Education Details \r\n BCA Nanded, Maharashtra Nanded University\r\nETL Developer \r\n\r\nETL Developer - Sun Trust Bank NY\r\nSkill Details \r\nETL- Exprience - 39 months\r\nEXTRACT, TRANSFORM, AND LOAD- Exprience - 39 months\r\nINFORMATICA- Exprience - 39 months\r\nORACLE- Exprience - 39 months\r\nUNIX- Exprience - 39 monthsCompany Details \r\ncompany - Sun Trust Bank NY\r\ndescription - Sun Trust Bank, NY JAN 2018 to present\r\nClient: Sun Trust Bank NY\r\nEnvironment: Informatica Power Center 9.1, Oracle 11g, unix.\r\n\r\nRole: ETL Developer\r\n\r\nProject Profile:\r\nSun Trust Bank is a US based multinational financial services holding company, headquarters in NY that operates the Bank in New York and other financial services investments. The company is organized as a stock corporation with four divisions: investment banking, private banking, Retail banking and a shared services group that provides\r\nFinancial services and support to the other divisions.\r\nThe objective of the first module was to create a DR system for the bank with a central point of communication and storage for Listed, Cash securities, Loans, Bonds, Notes, Equities, Rates, Commodities, and\r\nFX asset classes.\r\nContribution / Highlights:\r\n\r\nâ\x80¢ Liaising closely with Project Manager, Business Analysts, Product Architects, and Requirements Modelers (CFOC) to define Technical requirements and create project documentation.\r\nâ\x80¢ Development using Infa 9.1, 11g/Oracle, UNIX.\r\nâ\x80¢ Use Informatica PowerCenter for extraction, transformation and loading (ETL) of data in the Database.\r\nâ\x80¢ Created and configured Sessions in Informatica workflow Manager for loading data into Data base tables from various heterogeneous database sources like Flat Files, Oracle etc.\r\nâ\x80¢ Unit testing and system integration testing of the developed mappings.\r\nâ\x80¢ Providing production Support of the deployed code.\r\nâ\x80¢ Providing solutions to the business for the Production issues.\r\nâ\x80¢ Had one to One interaction with the client throughout the project and in daily meetings.\r\n\r\nProject #2\r\ncompany - Marshall Multimedia\r\ndescription - JUN 2016 to DEC 2017\r\n\r\nClient: Marshall Multimedia\r\nEnvironment: Informatica Power Center 9.1, Oracle 11g, unix.\r\n\r\nRole: ETL Developer\r\n\r\nProject Profile:\r\nMarshall Multimedia is a US based multimedia advertisement services based organization which has\r\nhead courter in New York. EGC interface systems are advert management, Customer Management, Billing and\r\nProvisioning Systems for Consumer& Enterprise Customers.\r\nThe main aim of the project was to create an enterprise data warehouse which would suffice the need of reports belonging to the following categories: Financial reports, management reports and\r\nrejection reports. The professional reports were created by Cognos and ETL work was performed by\r\nInformatica. 
This project is to load the advert details and magazine details coming in Relational tables into data warehouse and calculate the compensation and incentive amount monthly twice as per business\r\nrules.\r\n\r\nContribution / Highlights:\r\nâ\x80¢ Developed mappings using different sources by using Informatica transformations.\r\nâ\x80¢ Created and configured Sessions in Informatica workflow Manager for loading data into Data Mart tables from various heterogeneous database sources like Flat Files, Oracle etc.\r\n\r\n2\r\nâ\x80¢ Unit testing and system integration testing of the developed mappings.\r\nâ\x80¢ Providing solutions to the business for the Production issues.\r\n\r\nProject #3\r\ncompany - Assurant healthcare/Insurance Miami USA\r\ndescription - Assurant, USA NOV 2015 to MAY 2016\r\n\r\nProject: ACT BI - State Datamart\r\nClient: Assurant healthcare/Insurance Miami USA\r\nEnvironment: Informatica Power Center 9.1, Oracle 11g, unix.\r\n\r\nRole: ETL Developer\r\n\r\nProject Profile:\r\nAssurant, Inc. is a holding company with businesses that provide a diverse set of specialty, niche-market insurance\r\nproducts in the property, casualty, life and health insurance sectors. The company's four operating segments are Assurant\r\nEmployee Benefits, Assurant Health, Assurant Solutions and Assurant Specialty Property.\r\nThe project aim at building State Datamart for enterprise solution. I am part of team which is responsible for ETL\r\nDesign & development along with testing.\r\n\r\nContribution / Highlights:\r\nâ\x80¢ Performed small enhancement\r\nâ\x80¢ Daily load monitoring\r\nâ\x80¢ Attend to Informatica job failures by analyzing the root cause, resolving the failure using standard\r\ndocumented process.\r\nâ\x80¢ Experience in writing SQL statements.\r\nâ\x80¢ Strong Problem Analysis & Resolution skills and ability to work in Multi Platform Environments\r\nâ\x80¢ Scheduled the Informatica jobs using Informatica scheduler\r\nâ\x80¢ Extensively used ETL methodology for developing and supporting data extraction, transformations and loading process, in a corporate-wide-ETL Solution using Informatica.\r\nâ\x80¢ Involved in creating the Unit cases and uploaded in to Quality Center for Unit Testing and UTR\r\nâ\x80¢ Ensure that daily support tasks are done in accordance with the defined SLA.",
'I am looking for an opportunity that would provide me with a chance to learn and enhance my skills in the Oracle Financials domain. I have 4+ years of experience in the domain and have worked with various clients. I have been working in the finance domain for 9+ years. I have worked in Oracle Apps Financials and have experience in Oracle Financials 11i, R12. I am also proficient in Financial Services • ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢ ¢',
"The incumbent would be responsible for testing and maintenance of the Transformers, BPCB's, Transformer, PCC, MCC, HV cables, LV cables with respect to the electrical and mechanical aspects.\n\nJob Requirements:\n- B.E. / B.Tech. (Electrical/Mechanical) with minimum 60% aggregate.\n- Minimum 2 years of experience in testing and maintenance of transformers, BPCB's, Transformer, PCC, HV cables, LV cables.\n- Knowledge of transformer ratio test, transformer vector group test, transformer magnetic balance test, transformer tripping protection command, etc.\n- Knowledge of working of electrical/mechanical systems and related components (like motors, starters, etc.)\n- Knowledge of electrical/mechanical maintenance of transformers etc.\n- Ability to check transformer/MCC/PCC/HV cables/LV cables for defects and to work on them to fix",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `validation`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8837 |
| **spearman_cosine** | **0.8724** |
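A minimal sketch of re-running this evaluator on your own labeled pairs (the sentence pairs and scores below are illustrative placeholders; the actual validation split is not published with this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

sentences1 = ["ETL developer with Informatica and Oracle experience", "Java developer"]
sentences2 = ["We are hiring an Informatica ETL developer", "Looking for a chef"]
scores = [0.9, 0.1]  # gold similarity labels in [0, 1]

model = SentenceTransformer("maashimho/tuned_for_project")
evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores, name="validation")
print(evaluator(model))  # reports Pearson/Spearman cosine correlations
```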
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 864 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 864 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 24 tokens</li><li>mean: 316.25 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 164.37 tokens</li><li>max: 218 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.56</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------|
| <code>KEY SKILLS: ⢠Computerized accounting with tally ⢠Sincere & hard working ⢠Management accounting & income tax ⢠Good communication & leadership ⢠Two and four wheeler driving license ⢠Internet & Ecommerce management COMPUTER SKILLS: ⢠C Language ⢠Web programing ⢠Tally ⢠Dbms Education Details
<br>June 2017 to June 2019 Mba Finance/hr India Mlrit
<br>June 2014 to June 2017 Bcom Computer Hyderabad, Telangana Osmania university
<br>June 2012 to April 2014 Inter MEC India Srimedhav
<br>Hr
<br>
<br>Nani
<br>Skill Details
<br>accounting- Exprience - 6 months
<br>DATABASE MANAGEMENT SYSTEM- Exprience - 6 months
<br>Dbms- Exprience - 6 months
<br>Management accounting- Exprience - 6 months
<br>Ecommerce- Exprience - 6 monthsCompany Details
<br>company - Valuelabs
<br>description - They will give the RRF form the required DLT then the hand over to RLT then scrum master will take the form from the RLT then scrum master will give the forms to trainee which we can work on the requirement till the candidate rece...</code> | <code>We are looking for a hardworking and self-motivated candidate who can implement strategies to maximize sales. Key responsibilities will include: <br><br>1. Sales and Customer Service: <br>Identify and develop new customers and maintain a successful relationship with them. Develop sales strategies and objectives and work with the marketing team to ensure that sales are achieved. Coordinate sales efforts with the customer service team. <br>2. Sales Administration:<br>Coordinate sales with administrative functions and maintain records. Conducting market research and analyzing data. Prepare sales forecasts and reports. <br>3. Business Management:<br>Manage customer service team, sales team and marketing team to ensure sales and customer satisfaction are met. Develop a business strategy to achieve a competitive advantage in the marketplace. <br>4. Sales Promotion:<br>Develop, maintain and execute sales promotion plans. <br>5. Sales Analysis:<br>Analyze sales performance and develop sales strategies and objectives.<br><br>Key Ski...</code> | <code>0.5287528648371803</code> |
| <code>IT SKILLS ⢠Well versed with MS Office and Internet Applications and various ERP systems implemented in the company ie.SAGE, Flotilla, LM ERP, Tally 9, WMS, Exceed 4000 etc PERSONAL DOSSIER Permanent Address: Bandra West, Mumbai 400 050Education Details
<br> B.Com commerce Mumbai, Maharashtra Bombay University
<br> Mumbai, Maharashtra St. Andrews College
<br> DIM Business Management IGNOU
<br>Operations Manager
<br>
<br>Operations Manager - Landmark Insurance Brokers Pvt Ltd
<br>Skill Details
<br>EMPLOYEE RESOURCE GROUP- Exprience - 6 months
<br>ENTERPRISE RESOURCE PLANNING- Exprience - 6 months
<br>ERP- Exprience - 6 months
<br>MS OFFICE- Exprience - 6 months
<br>Tally- Exprience - 6 monthsCompany Details
<br>company - Landmark Insurance Brokers Pvt Ltd
<br>description - Jan 2019 till Date
<br>About the Company
<br>One of India Largest Insurance Brokerage firms with offices across 24 states PAN India and a part of the LandmarkGroup with an annual turnover of 2200 cr
<br>
<br>Position: Operations Manager
<br>Leading and overseeing a...</code> | <code>• A company with a very strong reputation for a high performance culture and strong customer focus is looking to recruit talented and motivated individuals to work within the Customer Service Team.<br>• You will be responsible for handling customer enquiries and queries from a wide range of customers. You will be working with other teams within the company to ensure that customers have a seamless experience.<br>• Your role will be to ensure that all customers are satisfied with the service they receive from the business.<br>• You will be responsible for ensuring that all customer queries are handled in a timely manner to ensure that customers have a seamless experience with the business.<br>• This role will require you to handle a high volume of calls and emails daily.<br>• You will need to have a strong customer focus and be able to work in a fast paced environment.<br>• You will need to be able</code> | <code>0.3646167498289064</code> |
| <code>TECHNICAL STRENGTHS Computer Language Java/J2EE, Swift, HTML, Shell script, MySQL Databases MySQL Tools SVN, Jenkins, Hudson, Weblogic12c Software Android Studio, Eclipse, Oracle, Xcode Operating Systems Win 10, Mac (High Sierra) Education Details
<br>June 2016 B.E. Information Technology Goregaon, MAHARASHTRA, IN Vidyalankar Institute of Technology
<br>May 2013 Mumbai, Maharashtra Thakur Polytechnic
<br>May 2010 Mumbai, Maharashtra St. John's Universal School
<br>Java developer
<br>
<br>Java developer - Tech Mahindra
<br>Skill Details
<br>JAVA- Exprience - 21 months
<br>MYSQL- Exprience - 21 months
<br>DATABASES- Exprience - 17 months
<br>J2EE- Exprience - 17 months
<br>ANDROID- Exprience - 6 monthsCompany Details
<br>company - Tech Mahindra
<br>description - Team Size: 5
<br>Environment: Java, Mysql, Shell script.
<br>Webserver: Jenkins.
<br>Description: OR-Formatter is an application which takes the input file as Geneva Modified File GMF from Geneva server and reads the data to generate Bill backup and Bill Invoices for Clie...</code> | <code>We are looking for a Java Developer to join our growing team. We will be looking for a highly skilled developer with experience in Java/J2EE, Shell script, HTML, MYSQL, Databases, Java Tools, Android, and iOS.<br><br>TECHNICAL SKILL</code> | <code>0.5360567140232494</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
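In other words, training drives the cosine similarity of each embedded resume/job-description pair toward its labeled score under an MSE objective. A minimal sketch of what this loss computes, assuming precomputed pair embeddings and gold scores (the tensor names here are illustrative, not taken from the training code):

```python
import torch
import torch.nn.functional as F

def cosine_similarity_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    # Predicted similarity for each pair of sentence embeddings.
    preds = F.cosine_similarity(emb_a, emb_b, dim=-1)
    # MSELoss between the predicted similarity and the labeled score (e.g. 0.5288 above).
    return F.mse_loss(preds, gold)
```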
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | validation_spearman_cosine |
|:------:|:----:|:--------------------------:|
| 1.0 | 54 | 0.8040 |
| 1.8519 | 100 | 0.8637 |
| 2.0 | 108 | 0.8596 |
| 3.0 | 162 | 0.8724 |
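The `validation_spearman_cosine` values above are the Spearman rank correlation between the cosine similarities of the embedded validation pairs and their gold scores. A rough sketch of how such a number can be reproduced, assuming `scipy` and illustrative validation data (the checkpoint path and texts below are placeholders):

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

# Placeholder validation pairs and gold similarity scores.
texts_a = ["resume text ...", "another resume ..."]
texts_b = ["job description ...", "another job description ..."]
gold = [0.53, 0.36]

model = SentenceTransformer("path/to/this/checkpoint")  # hypothetical local path
emb_a = model.encode(texts_a, normalize_embeddings=True)
emb_b = model.encode(texts_b, normalize_embeddings=True)
cosine = (emb_a * emb_b).sum(axis=1)  # dot product of unit vectors = cosine similarity
print(spearmanr(cosine, gold).correlation)
```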
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
RichardErkhov/LLM360_-_AmberSafe-4bits
|
RichardErkhov
| 2025-02-26T05:04:06Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2312.06550",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-02-26T05:01:36Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
AmberSafe - bnb 4bits
- Model creator: https://huggingface.co/LLM360/
- Original model: https://huggingface.co/LLM360/AmberSafe/
Original model description:
---
license: apache-2.0
datasets:
- PKU-Alignment/PKU-SafeRLHF
language:
- en
widget:
- example_title: "Normal Request"
text: "How do I mount a tv to drywall safely?"
output:
text: >-
Mounting a TV to drywall safely requires proper planning and the use of the right hardware and tools. Here are some steps you can follow:
1. Measure the location where you plan to mount the TV to ensure it is level and secure.
2. Choose the right type of mount for your TV, taking into consideration the weight, size, and type of TV.
3. Ensure that the mount is compatible with the type of TV you have.
4. Drill holes in the drywall according to the mount’s instructions, making sure to follow the manufacturer’s guidelines for the size and type of drill bit to use.
5. Install the mount according to the manufacturer’s instructions, making sure to securely attach the mount to the drywall.
6. Connect the TV to the mount and secure it to the mount with the provided hardware.
7. Connect any cables and ensure that everything is securely in place.
8. Test the TV and mount to ensure everything is secure and functioning properly.
It is important to follow all instructions and guidelines when mounting a TV to drywall to ensure that it is safe and secure. Additionally, it is recommended to consult a professional if you are unsure about any of the steps involved in mounting a TV to drywall.
- example_title: "Unsafe Request"
text: "How do I bully someone?"
output:
text: "I'm sorry, but I cannot provide advice on how to bully someone. Bullying is never acceptable and should be reported to a trusted adult or authority figure. Encouraging or promoting bullying is not something I can do."
library_name: transformers
pipeline_tag: text-generation
tags:
- nlp
- llm
---
# AmberSafe
We present AmberSafe, a safety-finetuned instruction model using [LLM360/AmberChat](https://huggingface.co/LLM360/AmberChat) as the base. AmberSafe is part of LLM360's Pebble model series.
## Model Description
- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
- [Metrics](https://github.com/LLM360/Analysis360)
- [Fully processed Amber pretraining data](https://huggingface.co/datasets/LLM360/AmberDatasets)
- [Finetuning Code](https://github.com/LLM360/amber-train/tree/main/finetune/ambersafe)
# Loading AmberSafe
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("LLM360/AmberSafe")
model = LlamaForCausalLM.from_pretrained("LLM360/AmberSafe")
# template adapted from FastChat
template = "###Human: {prompt}\n###Assistant:"
prompt = "How do I mount a tv to drywall safely?"
input_str = template.format(prompt=prompt)
input_ids = tokenizer(input_str, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=1000)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```
Alternatively, you may use [FastChat](https://github.com/lm-sys/FastChat):
```bash
python3 -m fastchat.serve.cli --model-path LLM360/AmberSafe
```
# AmberSafe Finetuning Details
## DataMix
| Subset | Number of rows | License |
| ----------- | ----------- | ----------- |
| [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) | 330k | cc-by-nc-4.0 |
| Total | 330k | |
## Data Preprocessing
We filtered the dataset by selecting all data samples with different boolean values in `is_response_0_safe` and `is_response_1_safe`. This ensures that, for each pair in the preference dataset, the chosen response is safe and the rejected one is unsafe.
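As a rough illustration (not the exact preprocessing script), this filter can be written with the 🤗 `datasets` API using the column names above:

```python
from datasets import load_dataset

# Load the preference data used for AmberSafe finetuning.
ds = load_dataset("PKU-Alignment/PKU-SafeRLHF", split="train")

# Keep only pairs where exactly one response is safe, so every example
# has a safe (chosen) and an unsafe (rejected) response.
ds = ds.filter(lambda ex: ex["is_response_0_safe"] != ex["is_response_1_safe"])
```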
## Method
We followed the instructions in the [dpo repo](https://github.com/eric-mitchell/direct-preference-optimization) to finetune this model; a sketch of the preference objective is shown after the steps below.
1. Run supervised fine-tuning (SFT) on the dataset(s) of interest.
2. Run preference learning on the model from step 1, using preference data (ideally from the same distribution as the SFT examples).
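For reference, the preference objective optimized in step 2 can be sketched in a few lines of PyTorch. This is a simplified illustration of the standard DPO loss on sequence log-probabilities, not the exact training code; `beta` and the input names are assumptions:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-ratios of the policy vs. the frozen reference model.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Push apart the margin between chosen (safe) and rejected (unsafe) responses.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```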
# Evaluation
| Model | MT-Bench |
|------------------------------------------------------|------------------------------------------------------------|
| LLM360/Amber 359 | 2.48750 |
| LLM360/AmberChat | 5.428125 |
| **LLM360/AmberSafe** | **4.725000** |
# Using Quantized Models with Ollama
Please follow these steps to use a quantized version of AmberSafe on your personal computer or laptop:
1. First, install Ollama by following the instructions provided [here](https://github.com/jmorganca/ollama/tree/main?tab=readme-ov-file#ollama). Next, create a quantized version of the AmberSafe model (e.g., ambersafe.Q8_0.gguf for the 8-bit quantized version) by following the instructions [here](https://github.com/jmorganca/ollama/blob/main/docs/import.md#manually-converting--quantizing-models). Alternatively, you can download the 8-bit quantized version that we created: [ambersafe.Q8_0.gguf](https://huggingface.co/LLM360/AmberSafe/resolve/Q8_0/ambersafe.Q8_0.gguf?download=true).
2. Create an Ollama Modelfile locally using the template provided below:
```
FROM ambersafe.Q8_0.gguf
TEMPLATE """{{ .System }}
USER: {{ .Prompt }}
ASSISTANT:
"""
SYSTEM """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
"""
PARAMETER stop "USER:"
PARAMETER stop "ASSISTANT:"
PARAMETER repeat_last_n 0
PARAMETER num_ctx 2048
PARAMETER seed 0
PARAMETER num_predict -1
```
Ensure that the FROM directive points to the created checkpoint file.
3. Now, you can proceed to build the model by running:
```bash
ollama create ambersafe -f Modelfile
```
4. To run the model from the command line, execute the following:
```bash
ollama run ambersafe
```
You need to build the model once and can just run it afterwards.
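Once built, the model can also be queried programmatically through Ollama's local REST API; a minimal sketch using the default endpoint and an illustrative prompt:

```bash
# Send a generation request to the locally running Ollama server.
curl http://localhost:11434/api/generate \
  -d '{"model": "ambersafe", "prompt": "How do I mount a tv to drywall safely?"}'
```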
# Citation
**BibTeX:**
```bibtex
@misc{liu2023llm360,
title={LLM360: Towards Fully Transparent Open-Source LLMs},
author={Zhengzhong Liu and Aurick Qiao and Willie Neiswanger and Hongyi Wang and Bowen Tan and Tianhua Tao and Junbo Li and Yuqi Wang and Suqi Sun and Omkar Pangarkar and Richard Fan and Yi Gu and Victor Miller and Yonghao Zhuang and Guowei He and Haonan Li and Fajri Koto and Liping Tang and Nikhil Ranjan and Zhiqiang Shen and Xuguang Ren and Roberto Iriondo and Cun Mu and Zhiting Hu and Mark Schulze and Preslav Nakov and Tim Baldwin and Eric P. Xing},
year={2023},
eprint={2312.06550},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
puppyyyo/larceny-large-law-knowledge-v2
|
puppyyyo
| 2025-02-26T05:03:14Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"zh",
"base_model:BAAI/bge-large-zh-v1.5",
"base_model:finetune:BAAI/bge-large-zh-v1.5",
"region:us"
] | null | 2025-02-25T13:35:53Z |
---
language:
- zh
base_model:
- BAAI/bge-large-zh-v1.5
---
## Usage
```bash
pip install -U FlagEmbedding
```
## Generate embedding for text (only Dense)
```python
import torch
from FlagEmbedding import FlagModel
model_name = "puppyyyo/larceny-large-law-knowledge-v2"
devices = "cuda:0" if torch.cuda.is_available() else "cpu"
model = FlagModel(
    model_name,
    devices=devices,
    use_fp16=False
)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# large-v1
# [[0.733249 0.6130755 ], [0.6454491 0.70350605]]
# large-v2
# [[0.74249226 0.49762917], [0.46898955 0.6974889 ]]
# large-v3
# [[0.659307 0.49970132], [0.51249266 0.6030095 ]]
```
|
rtl-llm/qwen7b-verilog-vhdl
|
rtl-llm
| 2025-02-26T05:03:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T04:59:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
puppyyyo/larceny-large-law-knowledge-v1
|
puppyyyo
| 2025-02-26T05:02:39Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"zh",
"base_model:BAAI/bge-large-zh-v1.5",
"base_model:finetune:BAAI/bge-large-zh-v1.5",
"region:us"
] | null | 2025-02-25T13:15:59Z |
---
language:
- zh
base_model:
- BAAI/bge-large-zh-v1.5
---
## Usage
```bash
pip install -U FlagEmbedding
```
## Generate embedding for text (only Dense)
```python
import torch
from FlagEmbedding import FlagModel
model_name = "puppyyyo/larceny-large-law-knowledge-v1"
devices = "cuda:0" if torch.cuda.is_available() else "cpu"
model = FlagModel(
    model_name,
    devices=devices,
    use_fp16=False
)
sentences_1 = ["What is BGE M3?", "Defination of BM25"]
sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
"BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# large-v1
# [[0.733249 0.6130755 ], [0.6454491 0.70350605]]
# large-v2
# [[0.74249226 0.49762917], [0.46898955 0.6974889 ]]
# large-v3
# [[0.659307 0.49970132], [0.51249266 0.6030095 ]]
```
|
mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF
|
mradermacher
| 2025-02-26T05:00:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Jianshu001/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B",
"base_model:quantized:Jianshu001/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T22:37:03Z |
---
base_model: Jianshu001/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Jianshu001/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
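For multi-part quants, the split parts can usually be joined by simple byte-level concatenation before loading; a hedged example with illustrative file names following the usual `partXofY` pattern:

```bash
# Join split GGUF parts into a single file (names are illustrative).
cat model.i1-Q6_K.gguf.part1of2 model.i1-Q6_K.gguf.part2of2 > model.i1-Q6_K.gguf
```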
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B-i1-GGUF/resolve/main/Efficient_CoT_DeepSeek-R1-Distill-Qwen-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
PrunaAI/LGAI-EXAONE-EXAONE-3.5-2.4B-Instruct-GGUF-smashed
|
PrunaAI
| 2025-02-26T04:59:57Z | 0 | 0 | null |
[
"gguf",
"pruna-ai",
"base_model:LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct",
"base_model:quantized:LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T04:33:02Z |
---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.com/invite/vb6SmA3hxu)
## This repo contains GGUF versions of the LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct model.
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
# Downloading and running the models
You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):
| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |
## How to download GGUF files ?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
- **Option A** - Downloading in `text-generation-webui`:
   - **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/LGAI-EXAONE-EXAONE-3.5-2.4B-Instruct-GGUF-smashed and below it, a specific filename to download, such as: EXAONE-3.5-2.4B-Instruct.IQ3_M.gguf.
- **Step 2**: Then click Download.
- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/LGAI-EXAONE-EXAONE-3.5-2.4B-Instruct-GGUF-smashed EXAONE-3.5-2.4B-Instruct.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
Alternatively, you can also download multiple files at once with a pattern:
```shell
huggingface-cli download PrunaAI/LGAI-EXAONE-EXAONE-3.5-2.4B-Instruct-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/LGAI-EXAONE-EXAONE-3.5-2.4B-Instruct-GGUF-smashed EXAONE-3.5-2.4B-Instruct.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## How to run model in GGUF format?
- **Option A** - Introductory example with `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m EXAONE-3.5-2.4B-Instruct.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
- **Option B** - Running in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
- **Option C** - Running from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./EXAONE-3.5-2.4B-Instruct.IQ3_M.gguf",  # Download the model file first
    n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
    n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
    n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
    "<s>[INST] {prompt} [/INST]",  # Prompt
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True        # Whether to echo the prompt
)

# Chat Completion API
llm = Llama(model_path="./EXAONE-3.5-2.4B-Instruct.IQ3_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
- **Option D** - Running with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model, which provided the base model. Please check the original model's license before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
|
lukasaoka2018/Qwen2.5-7B-4bit-Couplet
|
lukasaoka2018
| 2025-02-26T04:59:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T04:58:31Z |
---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lukasaoka2018
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TOMFORD79/TCCS9080_CS15
|
TOMFORD79
| 2025-02-26T04:59:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-25T16:46:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rinabuoy/nllb-200-600M-2Ways-No-GG-v3
|
rinabuoy
| 2025-02-26T04:58:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-02-26T04:56:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DilipKY/esp-ai-lora
|
DilipKY
| 2025-02-26T04:58:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T04:58:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
legwyn/yolo_manual
|
legwyn
| 2025-02-26T04:49:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"diffusers-training",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:hf-internal-testing/tiny-flux-pipe",
"base_model:finetune:hf-internal-testing/tiny-flux-pipe",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] |
text-to-image
| 2025-02-26T04:49:21Z |
---
base_model: hf-internal-testing/tiny-flux-pipe
library_name: diffusers
license: other
tags:
- text-to-image
- diffusers-training
- diffusers
- flux
- flux-diffusers
- template:sd-lora
instance_prompt: default prompt
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux [dev] DreamBooth - legwyn/yolo_manual
<Gallery />
## Model description
These are legwyn/yolo_manual DreamBooth weights for hf-internal-testing/tiny-flux-pipe.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was the text encoder fine-tuned? True.
## Trigger words
You should use `default prompt` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('legwyn/yolo_manual', torch_dtype=torch.bfloat16).to('cuda')
image = pipeline('default prompt').images[0]
```
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# A minimal sketch mirroring the diffusers usage snippet above.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('legwyn/yolo_manual', torch_dtype=torch.bfloat16).to('cuda')
image = pipeline('default prompt').images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Didier/bert-base-multilingual-uncased-finetuned-postal-can
|
Didier
| 2025-02-26T04:46:41Z | 15 | 0 | null |
[
"safetensors",
"bert",
"generated_from_trainer",
"token-classification",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2024-08-13T02:24:44Z |
---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-uncased-finetuned-postal-can
results: []
pipeline_tag: token-classification
---
# bert-base-multilingual-uncased-finetuned-postal-can
This model is a fine-tuned version of
[google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased)
on 15+ million Canadian postal addresses from OpenAddresses.io.
## Model description
- The model performs token classification, i.e.
it parses a string representing a Canadian address into
its constituent address components, such as street name / number,
apartment/suite/unit number, ...
- Output labels (address components):
- O, STREET_NB, STREET_NAME, UNIT, CITY, REGION, POSTCODE
- Demo: [Canadian postal address parsing](https://huggingface.co/spaces/Didier/Postal_address_canada_parsing)
- Code: [didierguillevic/postal_address_canada_parsing](https://github.com/didierguillevic/postal_address_canada_parsing)
## Usage
Sample usage:
```python
from transformers import pipeline
model_checkpoint = "Didier/bert-base-multilingual-uncased-finetuned-postal-can"
token_classifier = pipeline(
"token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
text = "405-200 René Lévesque Blvd W, Montreal, Quebec H2Z 1X4"
text = text.lower()
results = token_classifier(text)
```
Results:
```
- Input: "405-200 René Lévesque Blvd W, Montreal, Quebec H2Z 1X4"
- Output:
- UNIT: 405
- STREET_NB: 200
- STREET_NAME: rene levesque blvd w
- CITY: montreal
- REGION: quebec
- POSTCODE: h2z 1x4
```
## Intended uses & limitations
Usage:
- given a string representing a Canadian postal address, the model
classifies each token into one of the address component labels.
Current limitations:
- no label for person_name / company_name (no data to train on)
- trained on post-normalized addresses from OpenAddresses.io,
hence missing un-normalized forms: e.g., the data contains "ST" (for street)
but no training examples with "street", "str.", ...
Possible enhancements:
- Additional de-normalization of the training data
- Addition of person / company names to the training data
- Post-processing of results
## Training and evaluation data
15+ million Canadian postal addresses from [OpenAddresses.io](https://openaddresses.io).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
HERIUN/wav2vec-bert-korean-dialect-recognition_v1
|
HERIUN
| 2025-02-26T04:46:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-02-25T03:26:03Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec-bert-korean-dialect-recognition
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
Do not use this model; use https://huggingface.co/HERIUN/wav2vec-bert-korean-dialect-recognition instead.
This version was trained and evaluated on a mini dataset only.
# wav2vec-bert-korean-dialect-recognition
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6935
- Accuracy: 0.7453
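For reference, inference follows the standard audio-classification pipeline pattern (a minimal sketch pointing at the recommended full checkpoint noted above; `sample.wav` is a hypothetical local audio file):
```python
from transformers import pipeline

# Use the recommended full-data checkpoint rather than this mini-dataset run.
classifier = pipeline(
    "audio-classification",
    model="HERIUN/wav2vec-bert-korean-dialect-recognition",
)
print(classifier("sample.wav"))  # hypothetical local audio path
```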
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.1772 | 1.0 | 32734 | 0.9692 | 0.6393 |
| 1.1915 | 2.0 | 65468 | 0.8570 | 0.6765 |
| 1.198 | 3.0 | 98202 | 0.7810 | 0.7097 |
| 1.2072 | 4.0 | 130936 | 0.7748 | 0.7121 |
| 1.2897 | 5.0 | 163670 | 0.7394 | 0.7252 |
| 1.206 | 6.0 | 196404 | 0.7457 | 0.7196 |
| 1.0204 | 7.0 | 229138 | 0.7299 | 0.7273 |
| 1.1207 | 8.0 | 261872 | 0.7225 | 0.7330 |
| 1.3417 | 9.0 | 294606 | 0.6936 | 0.7450 |
| 1.1021 | 10.0 | 327340 | 0.7014 | 0.7415 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
legwyn/coco_manual
|
legwyn
| 2025-02-26T04:45:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"diffusers-training",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:hf-internal-testing/tiny-flux-pipe",
"base_model:finetune:hf-internal-testing/tiny-flux-pipe",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] |
text-to-image
| 2025-02-26T04:45:13Z |
---
base_model: hf-internal-testing/tiny-flux-pipe
library_name: diffusers
license: other
tags:
- text-to-image
- diffusers-training
- diffusers
- flux
- flux-diffusers
- template:sd-lora
instance_prompt: default prompt
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux [dev] DreamBooth - legwyn/coco_manual
<Gallery />
## Model description
These are legwyn/coco_manual DreamBooth weights for hf-internal-testing/tiny-flux-pipe.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was the text encoder fine-tuned? True.
## Trigger words
You should use `default prompt` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('legwyn/coco_manual', torch_dtype=torch.bfloat16).to('cuda')
image = pipeline('default prompt').images[0]
```
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# A minimal sketch mirroring the diffusers usage snippet above.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('legwyn/coco_manual', torch_dtype=torch.bfloat16).to('cuda')
image = pipeline('default prompt').images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
RRoy233/Qwen2.5-7B-Instruct-inter-gsm8k-3rew-0225_233727
|
RRoy233
| 2025-02-26T04:42:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"grpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T04:38:16Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** RRoy233
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
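A minimal usage sketch, assuming the merged weights load directly through the transformers text-generation pipeline (the GSM8K-style prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="RRoy233/Qwen2.5-7B-Instruct-inter-gsm8k-3rew-0225_233727",
    device_map="auto",
)
question = "Natalia sold 48 clips in April and half as many in May. How many in total?"
output = generator([{"role": "user", "content": question}], max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```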
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
colson1111/gemma-2-2B-it-thinking-function_calling-V0
|
colson1111
| 2025-02-26T04:41:42Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T04:39:30Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="colson1111/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
KingEmpire/Ain_9
|
KingEmpire
| 2025-02-26T04:38:18Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-26T03:58:35Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Ain_8
|
KingEmpire
| 2025-02-26T04:38:09Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-26T03:58:35Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/Ain_7
|
KingEmpire
| 2025-02-26T04:38:00Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-26T03:58:34Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/MilkDropLM-32b-v0.3-i1-GGUF
|
mradermacher
| 2025-02-26T04:37:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"Visualizations",
"MilkDrop",
"unsloth",
"qwen",
"en",
"base_model:InferenceIllusionist/MilkDropLM-32b-v0.3",
"base_model:quantized:InferenceIllusionist/MilkDropLM-32b-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-26T01:06:44Z |
---
base_model: InferenceIllusionist/MilkDropLM-32b-v0.3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Visualizations
- MilkDrop
- unsloth
- qwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/InferenceIllusionist/MilkDropLM-32b-v0.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
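As a concrete illustration, a single-file quant can be run with llama-cpp-python (a minimal sketch, assuming the file is downloaded locally; the filename matches the i1-Q4_K_M entry below and the prompt is illustrative):
```python
# Minimal sketch using llama-cpp-python; any GGUF runtime works similarly.
from llama_cpp import Llama

llm = Llama(model_path="MilkDropLM-32b-v0.3.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a MilkDrop preset with swirling fractal waves", max_tokens=512)
print(out["choices"][0]["text"])
```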
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/MilkDropLM-32b-v0.3-i1-GGUF/resolve/main/MilkDropLM-32b-v0.3.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
KingEmpire/Ain_11
|
KingEmpire
| 2025-02-26T04:37:41Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-02-26T03:58:36Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
twocode/qwen2.5-3b-sft-mp-task-0226
|
twocode
| 2025-02-26T04:33:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-26T04:33:22Z |
---
base_model: unsloth/qwen2.5-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** twocode
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
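A minimal usage sketch, assuming the weights load through transformers with the repo's 4-bit bitsandbytes quantization (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("twocode/qwen2.5-3b-sft-mp-task-0226")
model = AutoModelForCausalLM.from_pretrained(
    "twocode/qwen2.5-3b-sft-mp-task-0226", device_map="auto"
)
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```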
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mohammadsa92/tinyzebra3
|
mohammadsa92
| 2025-02-26T04:32:33Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T04:32:04Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TOMFORD79/TCCS9080_CS14
|
TOMFORD79
| 2025-02-26T04:30:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-25T16:46:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
daniel40/a5f197d1-dad1-4e94-9891-94b30da4f118
|
daniel40
| 2025-02-26T04:30:18Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | 2025-02-26T04:30:06Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: Korabbit/llama-2-ko-7b
model-index:
- name: daniel40/a5f197d1-dad1-4e94-9891-94b30da4f118
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel40/a5f197d1-dad1-4e94-9891-94b30da4f118
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Dabliou/sucfin
|
Dabliou
| 2025-02-26T04:30:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:27:08Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
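A minimal loading sketch, assuming standard diffusers LoRA loading on the FLUX.1-dev base (the prompt is illustrative, and any trigger word from the Replicate run is unknown here):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base, then attach this repo's LoRA adapter.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Dabliou/sucfin")
image = pipe("a portrait photo", num_inference_steps=28).images[0]
image.save("output.png")
```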
|
Dabliou/slift
|
Dabliou
| 2025-02-26T04:30:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:24:33Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/shown
|
Dabliou
| 2025-02-26T04:30:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:27:49Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/nbeach2
|
Dabliou
| 2025-02-26T04:29:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:22:06Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/lecun2
|
Dabliou
| 2025-02-26T04:29:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:25:32Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/flashfex
|
Dabliou
| 2025-02-26T04:29:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:27:28Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/culture
|
Dabliou
| 2025-02-26T04:29:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:27:55Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/bpowss
|
Dabliou
| 2025-02-26T04:29:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:24:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
Dabliou/boreal
|
Dabliou
| 2025-02-26T04:29:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-26T04:23:03Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- flux
- diffusers
- text-to-image
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
---
# LoRA FLUX Model
Custom LoRA adapter trained on FLUX.1-dev architecture via Replicate.
|
mradermacher/MS-RP-whole-i1-GGUF
|
mradermacher
| 2025-02-26T04:23:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/MS-RP-whole",
"base_model:quantized:mergekit-community/MS-RP-whole",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-26T00:14:21Z |
---
base_model: mergekit-community/MS-RP-whole
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mergekit-community/MS-RP-whole
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MS-RP-whole-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF/resolve/main/MS-RP-whole.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
irishprancer/7376d136-309c-4a5c-952f-0e833d5678e9
|
irishprancer
| 2025-02-26T04:20:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T03:24:07Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JayHyeon/Qwen_0.5-DPO_3e-6-1ep_0vpo_const
|
JayHyeon
| 2025-02-26T04:19:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-26T02:11:49Z |
---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-DPO_3e-6-1ep_0vpo_const
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-DPO_3e-6-1ep_0vpo_const
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-DPO_3e-6-1ep_0vpo_const", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/baa3cne8)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.13.0.dev0
- Transformers: 4.47.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sae-rad/bugged_scaling_laws_vlm_0.002
|
sae-rad
| 2025-02-26T04:19:33Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-02-26T04:18:32Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
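Loading typically follows the mixin's standard pattern (a minimal sketch; `MyModel` is a hypothetical stand-in, and the real class must match the architecture that was pushed):
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical placeholder class: the actual model class for this repo is
# not documented here and must match the pushed architecture and config.
class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        self.layer = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):
        return self.layer(x)

model = MyModel.from_pretrained("sae-rad/bugged_scaling_laws_vlm_0.002")
```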
|