| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 18:27:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (549 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 18:24:50) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| ruan-xf/intern_study_L0_4 | ruan-xf | 2025-03-03T15:45:39Z | 0 | 0 | null | ["safetensors", "internlm2", "custom_code", "region:us"] | null | 2025-03-03T15:41:19Z |
# InternLM (书生·浦语) Large Model Hands-on Camp 4
- Hugging Face model upload test
- For more, see https://github.com/InternLM/Tutorial/tree/camp4
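A minimal sketch of the kind of upload this test exercises, assuming `huggingface_hub` is installed and you are logged in (the local folder path is hypothetical):
```python
from huggingface_hub import HfApi

api = HfApi()  # assumes a prior `huggingface-cli login`
api.upload_folder(
    folder_path="./internlm2_model",  # hypothetical local checkpoint folder
    repo_id="ruan-xf/intern_study_L0_4",
    repo_type="model",
)
```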
|
| kweener/qwen-finetuned-Craig-7B-epoch10 | kweener | 2025-03-03T15:44:56Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_5_vl", "image-text-to-text", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-VL-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-03-03T15:28:21Z |
---
library_name: transformers
base_model: Qwen/Qwen2.5-VL-3B-Instruct
tags:
- generated_from_trainer
model-index:
- name: qwen-finetuned-Craig-7B-epoch10
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen-finetuned-Craig-7B-epoch10
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5635 | 1.0 | 25 | 13.0147 |
| 3.428 | 2.0 | 50 | 3.9008 |
| 0.6697 | 3.0 | 75 | 0.8603 |
| 0.6301 | 4.0 | 100 | 0.3525 |
| 0.4065 | 5.0 | 125 | 0.2897 |
| 0.2677 | 6.0 | 150 | 0.2492 |
| 0.3797 | 7.0 | 175 | 0.2400 |
| 0.3805 | 8.0 | 200 | 0.2237 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
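The card omits a usage snippet; a minimal loading sketch, assuming the standard Qwen2.5-VL classes that ship with the Transformers 4.49 release listed above (nothing here comes from the author):
```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Load the fine-tuned checkpoint for image-text-to-text inference.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "kweener/qwen-finetuned-Craig-7B-epoch10", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("kweener/qwen-finetuned-Craig-7B-epoch10")
```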
|
| chi-vi/WN-zh-vi-sim-v0.3-GPTQ-Int4 | chi-vi | 2025-03-03T15:43:19Z | 0 | 0 | null | ["safetensors", "qwen2", "zh", "vi", "license:cc-by-nc-sa-4.0", "4-bit", "gptq", "region:us"] | null | 2025-03-03T13:34:01Z |
---
license: cc-by-nc-sa-4.0
language:
- zh
- vi
---
GPTQ Int4 quantized version of [WN-zh-vi-sim-v0.3](https://huggingface.co/CjangCjengh/WN-zh-vi-sim-v0.3)
The model is used to align Chinese text with Vietnamese text. The script below walks the two input files in parallel, growing candidate line windows until the model judges that a Chinese window and a Vietnamese window correspond:
```python
import os
import json
import torch
import torch.nn.functional as F
from vllm import LLM
from vllm.config import PoolerConfig
from huggingface_hub import hf_hub_download

model_path = 'CjangCjengh/WN-zh-vi-sim-v0.3-GPTQ-Int4'
zh_text_path = 'zh.txt'
vi_text_path = 'vi.txt'
output_path = 'output.json'
save_interval = 50
device = 'cuda'
cpu_offload_gb = 0
lm_head_filename = 'yes_no_lm_head.pt'
lm_head_path = hf_hub_download(repo_id=model_path, filename=lm_head_filename, local_dir='.')

zh_idx = 0
vi_idx = 0
max_extra_lines = 5
align_list = []

# Resume from a partial output file if one exists.
if os.path.exists(output_path):
    with open(output_path, 'r', encoding='utf-8') as f:
        align_list = json.load(f)
    zh_idx = sum([i['zh'].count('\n') + 1 for i in align_list if i['zh']])
    vi_idx = sum([i['vi'].count('\n') + 1 for i in align_list if i['vi']])

lm_head = torch.load(lm_head_path)
lm_head = lm_head.to(device)  # Tensor.to() is not in-place, so rebind
llm = LLM(model=model_path, cpu_offload_gb=cpu_offload_gb, enforce_eager=True,
          task='embed', override_pooler_config=PoolerConfig(pooling_type='ALL'))

zh_lines = open(zh_text_path, 'r', encoding='utf-8').readlines()
vi_lines = open(vi_text_path, 'r', encoding='utf-8').readlines()
zh_lines = [l.strip() for l in zh_lines]
vi_lines = [l.strip() for l in vi_lines]

def get_sim_score(src_text, tgt_text):
    # Ask the model whether the two passages correspond exactly; the custom
    # lm_head projects the last hidden state onto a Yes/No pair, and the
    # softmax probability of the first class ("Yes") is the similarity score.
    text = f'<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\n下面的中文段落和越南语段落的内容是否完全对应,不存在缺漏?(回答Yes或No)\n\n中文:\n{src_text}\n\n越南语:{tgt_text}<|im_end|>\n<|im_start|>assistant\n'
    outputs = llm.encode(text)
    with torch.inference_mode():
        hidden_states = outputs[0].outputs.data[-1].to(dtype=lm_head.dtype).to(device)
        logits = torch.matmul(lm_head, hidden_states)
        result = F.softmax(logits, dim=0).tolist()
    return result[0]

def generate_pairs():
    # Enumerate window-size pairs (1,1), (1,2), (2,1), (2,2), ... in growing squares.
    visited = set()
    size = 1
    while True:
        for x in range(size + 1):
            for y in range(size + 1):
                if (x, y) not in visited:
                    visited.add((x, y))
                    yield (x + 1, y + 1)
        size += 1

while zh_idx < len(zh_lines) and vi_idx < len(vi_lines):
    for zh_i, vi_i in generate_pairs():
        zh_text = ''.join(zh_lines[zh_idx:zh_idx + zh_i])
        vi_text = ' '.join(vi_lines[vi_idx:vi_idx + vi_i])
        score = get_sim_score(zh_text, vi_text)
        if score > 0.5:
            break
        if zh_i == 1 and vi_i == 1:
            continue
        # If the most recently added lines match each other on their own,
        # the current block ends just before them.
        score = get_sim_score(zh_lines[zh_idx + zh_i - 1], vi_lines[vi_idx + vi_i - 1])
        if score > 0.5:
            zh_i -= 1
            vi_i -= 1
            break
        if zh_i > max_extra_lines or vi_i > max_extra_lines:
            # Fallback: look for the next boundary by scoring windows that
            # start after the candidate block.
            end_flag = False
            for zh_i, vi_i in generate_pairs():
                if zh_i == 1 and vi_i == 1:
                    continue
                for zh_i_offset, vi_i_offset in generate_pairs():
                    zh_start = zh_idx + zh_i - 1
                    vi_start = vi_idx + vi_i - 1
                    if zh_i + zh_i_offset > max_extra_lines and vi_i + vi_i_offset > max_extra_lines:
                        break
                    if zh_i + zh_i_offset > max_extra_lines or vi_i + vi_i_offset > max_extra_lines:
                        continue
                    zh_text = ''.join(zh_lines[zh_start:zh_start + zh_i_offset])
                    vi_text = ' '.join(vi_lines[vi_start:vi_start + vi_i_offset])
                    score = get_sim_score(zh_text, vi_text)
                    if score > 0.5:
                        zh_i -= 1
                        vi_i -= 1
                        end_flag = True
                        break
                if end_flag:
                    break
                if zh_i > max_extra_lines or vi_i > max_extra_lines:
                    # Give up: save progress and report where alignment failed.
                    with open(output_path, 'w', encoding='utf-8') as f:
                        json.dump(align_list, f, ensure_ascii=False, indent=0)
                    raise Exception(f'Error! zh line No.{zh_idx + 1} vi line No.{vi_idx + 1}')
            break  # boundary resolved by the fallback search
    zh_text = '\n'.join(zh_lines[zh_idx:zh_idx + zh_i])
    vi_text = '\n'.join(vi_lines[vi_idx:vi_idx + vi_i])
    align_list.append({'zh': zh_text, 'vi': vi_text})
    new_align = [list(range(zh_idx, zh_idx + zh_i)), list(range(vi_idx, vi_idx + vi_i))]
    print(new_align)
    if len(align_list) % save_interval == 0:
        with open(output_path, 'w', encoding='utf-8') as f:
            json.dump(align_list, f, ensure_ascii=False, indent=0)
    zh_idx += zh_i
    vi_idx += vi_i

with open(output_path, 'w', encoding='utf-8') as f:
    json.dump(align_list, f, ensure_ascii=False, indent=0)
```
|
| memevis/SG22 | memevis | 2025-03-03T15:41:03Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-03T15:35:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
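As a placeholder while this section is empty, a minimal sketch assuming the standard `transformers` text-generation API (the repo's tags list `llama` and `text-generation`; nothing below comes from the author):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("memevis/SG22")
model = AutoModelForCausalLM.from_pretrained("memevis/SG22", device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```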
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| Otakadelic/MT-Merge7-gemma-2-9B-Q4_K_M-GGUF | Otakadelic | 2025-03-03T15:40:12Z | 0 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:zelk12/MT-Merge7-gemma-2-9B", "base_model:quantized:zelk12/MT-Merge7-gemma-2-9B", "license:gemma", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-03-03T15:39:47Z |
---
base_model: zelk12/MT-Merge7-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: gemma
pipeline_tag: text-generation
---
# Otakadelic/MT-Merge7-gemma-2-9B-Q4_K_M-GGUF
This model was converted to GGUF format from [`zelk12/MT-Merge7-gemma-2-9B`](https://huggingface.co/zelk12/MT-Merge7-gemma-2-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT-Merge7-gemma-2-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Otakadelic/MT-Merge7-gemma-2-9B-Q4_K_M-GGUF --hf-file mt-merge7-gemma-2-9b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Otakadelic/MT-Merge7-gemma-2-9B-Q4_K_M-GGUF --hf-file mt-merge7-gemma-2-9b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Otakadelic/MT-Merge7-gemma-2-9B-Q4_K_M-GGUF --hf-file mt-merge7-gemma-2-9b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Otakadelic/MT-Merge7-gemma-2-9B-Q4_K_M-GGUF --hf-file mt-merge7-gemma-2-9b-q4_k_m.gguf -c 2048
```
|
| dabrown/168c366d-5b16-4f56-adca-3b5025d1677f | dabrown | 2025-03-03T15:40:11Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2.5-1.5B", "base_model:adapter:unsloth/Qwen2.5-1.5B", "license:apache-2.0", "region:us"] | null | 2025-03-03T14:26:23Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 168c366d-5b16-4f56-adca-3b5025d1677f
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 0ff973c3852970c4_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/0ff973c3852970c4_train_data.json
  type:
    field_instruction: question
    field_output: answer
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: false
group_by_length: true
hub_model_id: dabrown/168c366d-5b16-4f56-adca-3b5025d1677f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_inference_mode: true
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/0ff973c3852970c4_train_data.json
model_type: AutoModelForCausalLM
modules_to_save: lm_head
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
peft_use_rslora: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: b1e23278-252e-44d7-9491-1b28d344421c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1e23278-252e-44d7-9491-1b28d344421c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 168c366d-5b16-4f56-adca-3b5025d1677f
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8222 | 0.0600 | 375 | 2.0474 |
| 2.848 | 0.1201 | 750 | 1.9939 |
| 1.8269 | 0.1801 | 1125 | 1.9629 |
| 2.7946 | 0.2401 | 1500 | 1.9572 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
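Since this repo holds a PEFT LoRA adapter, a minimal loading sketch (an assumption, not from the card) applies it on top of the base model listed above:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-1.5B", device_map="auto")
model = PeftModel.from_pretrained(base, "dabrown/168c366d-5b16-4f56-adca-3b5025d1677f")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B")
```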
|
| Romain-XV/593ea626-37f4-40fd-b157-315456d1cfd5 | Romain-XV | 2025-03-03T15:38:44Z | 0 | 0 | peft | ["peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "license:other", "4-bit", "bitsandbytes", "region:us"] | null | 2025-03-03T14:52:45Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-1.8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 593ea626-37f4-40fd-b157-315456d1cfd5
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-1.8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 2a4a709f44d46b03_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/2a4a709f44d46b03_train_data.json
  type:
    field_instruction: premise
    field_output: hypothesis
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/593ea626-37f4-40fd-b157-315456d1cfd5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 7140
micro_batch_size: 2
mlflow_experiment_name: /tmp/2a4a709f44d46b03_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.029591927322226496
wandb_entity: null
wandb_mode: online
wandb_name: 5769b20b-d2ca-4e10-83d7-ac3e3b76ff18
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5769b20b-d2ca-4e10-83d7-ac3e3b76ff18
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 593ea626-37f4-40fd-b157-315456d1cfd5
This model is a fine-tuned version of [Qwen/Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 7140
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.2081 | 0.0000 | 1 | 3.7305 |
| 2.2224 | 0.0073 | 150 | 2.0010 |
| 2.2405 | 0.0146 | 300 | 1.9842 |
| 2.1368 | 0.0220 | 450 | 1.9894 |
| 2.0808 | 0.0293 | 600 | 1.9757 |
| 1.9111 | 0.0366 | 750 | 1.9764 |
| 1.7847 | 0.0439 | 900 | 1.9842 |
| 1.7949 | 0.0512 | 1050 | 1.9797 |
| 2.2459 | 0.0585 | 1200 | 1.9776 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
| distributed/optimized-gpt2-2b | distributed | 2025-03-03T15:38:13Z | 138,501 | 5 | transformers | ["transformers", "safetensors", "gpt_optimized", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us"] | text-generation | 2024-11-18T15:32:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
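As a placeholder while this section is empty: the repo's tags list `custom_code`, so loading presumably requires `trust_remote_code=True` to pull in the custom `gpt_optimized` model class. A hedged sketch, not the author's documented usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True executes the repo's custom modeling code.
tokenizer = AutoTokenizer.from_pretrained("distributed/optimized-gpt2-2b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("distributed/optimized-gpt2-2b", trust_remote_code=True)
```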
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| MrRobotoAI/D | MrRobotoAI | 2025-03-03T15:36:40Z | 28 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:Hastagaras/Jamet-8B-L3-MK.V-Blackroot", "base_model:merge:Hastagaras/Jamet-8B-L3-MK.V-Blackroot", "base_model:MrRobotoAI/155", "base_model:merge:MrRobotoAI/155", "base_model:MrRobotoAI/157", "base_model:merge:MrRobotoAI/157", "base_model:MrRobotoAI/218", "base_model:merge:MrRobotoAI/218", "base_model:MrRobotoAI/221", "base_model:merge:MrRobotoAI/221", "base_model:MrRobotoAI/227", "base_model:merge:MrRobotoAI/227", "base_model:MrRobotoAI/228", "base_model:merge:MrRobotoAI/228", "base_model:MrRobotoAI/229", "base_model:merge:MrRobotoAI/229", "base_model:MrRobotoAI/230", "base_model:merge:MrRobotoAI/230", "base_model:MrRobotoAI/231", "base_model:merge:MrRobotoAI/231", "base_model:MrRobotoAI/93", "base_model:merge:MrRobotoAI/93", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-02T21:54:05Z |
---
base_model:
- MrRobotoAI/218
- MrRobotoAI/227
- MrRobotoAI/221
- MrRobotoAI/230
- Blackroot/Llama-3-LongStory-LORA
- MrRobotoAI/231
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Blackroot/Llama-3-LongStory-LORA
- MrRobotoAI/155
- MrRobotoAI/229
- MrRobotoAI/228
- MrRobotoAI/157
- MrRobotoAI/93
library_name: transformers
tags:
- mergekit
- merge
---
# merge 14,138
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/228](https://huggingface.co/MrRobotoAI/228) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/218](https://huggingface.co/MrRobotoAI/218)
* [MrRobotoAI/227](https://huggingface.co/MrRobotoAI/227)
* [MrRobotoAI/221](https://huggingface.co/MrRobotoAI/221)
* [MrRobotoAI/230](https://huggingface.co/MrRobotoAI/230) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [MrRobotoAI/231](https://huggingface.co/MrRobotoAI/231)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA)
* [MrRobotoAI/155](https://huggingface.co/MrRobotoAI/155)
* [MrRobotoAI/229](https://huggingface.co/MrRobotoAI/229)
* [MrRobotoAI/157](https://huggingface.co/MrRobotoAI/157)
* [MrRobotoAI/93](https://huggingface.co/MrRobotoAI/93)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/93 # 13,552
- model: MrRobotoAI/155 # 13,658
- model: MrRobotoAI/157 # 12,931
- model: MrRobotoAI/218 # 14,970
- model: MrRobotoAI/221 # 10,647+
- model: MrRobotoAI/227
- model: MrRobotoAI/229
- model: MrRobotoAI/230+Blackroot/Llama-3-LongStory-LORA
- model: MrRobotoAI/231
- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot+Blackroot/Llama-3-LongStory-LORA
merge_method: model_stock
base_model: MrRobotoAI/228
normalize: true
dtype: float16
```
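A config like this is normally fed to mergekit's CLI; a minimal sketch, assuming mergekit is installed and the YAML above is saved as `config.yaml` (hypothetical filename):
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```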
|
| texanrangee/3bf5a023-a694-4f45-ab7b-bdd12003496c | texanrangee | 2025-03-03T15:36:31Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-03T11:15:12Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| Otakadelic/MT-Merge7-gemma-2-9B-Q8_0-GGUF | Otakadelic | 2025-03-03T15:33:33Z | 0 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:zelk12/MT-Merge7-gemma-2-9B", "base_model:quantized:zelk12/MT-Merge7-gemma-2-9B", "license:gemma", "endpoints_compatible", "region:us", "conversational"] | text-generation | 2025-03-03T15:32:45Z |
---
base_model: zelk12/MT-Merge7-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: gemma
pipeline_tag: text-generation
---
# Otakadelic/MT-Merge7-gemma-2-9B-Q8_0-GGUF
This model was converted to GGUF format from [`zelk12/MT-Merge7-gemma-2-9B`](https://huggingface.co/zelk12/MT-Merge7-gemma-2-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT-Merge7-gemma-2-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Otakadelic/MT-Merge7-gemma-2-9B-Q8_0-GGUF --hf-file mt-merge7-gemma-2-9b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Otakadelic/MT-Merge7-gemma-2-9B-Q8_0-GGUF --hf-file mt-merge7-gemma-2-9b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Otakadelic/MT-Merge7-gemma-2-9B-Q8_0-GGUF --hf-file mt-merge7-gemma-2-9b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Otakadelic/MT-Merge7-gemma-2-9B-Q8_0-GGUF --hf-file mt-merge7-gemma-2-9b-q8_0.gguf -c 2048
```
|
| MrRobotoAI/A | MrRobotoAI | 2025-03-03T15:32:35Z | 21 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:MrRobotoAI/155", "base_model:merge:MrRobotoAI/155", "base_model:MrRobotoAI/218", "base_model:merge:MrRobotoAI/218", "base_model:MrRobotoAI/227", "base_model:merge:MrRobotoAI/227", "base_model:MrRobotoAI/228", "base_model:merge:MrRobotoAI/228", "base_model:MrRobotoAI/229", "base_model:merge:MrRobotoAI/229", "base_model:MrRobotoAI/231", "base_model:merge:MrRobotoAI/231", "base_model:MrRobotoAI/93", "base_model:merge:MrRobotoAI/93", "base_model:hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora", "base_model:merge:hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-02T00:57:42Z |
---
base_model:
- MrRobotoAI/155
- hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
- MrRobotoAI/227
- hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
- MrRobotoAI/93
- hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
- MrRobotoAI/231
- MrRobotoAI/228
- MrRobotoAI/229
- MrRobotoAI/218
- hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
library_name: transformers
tags:
- mergekit
- merge
---
# merge 13,017
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/228](https://huggingface.co/MrRobotoAI/228) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/155](https://huggingface.co/MrRobotoAI/155) + [hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora](https://huggingface.co/hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora)
* [MrRobotoAI/227](https://huggingface.co/MrRobotoAI/227) + [hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora](https://huggingface.co/hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora)
* [MrRobotoAI/93](https://huggingface.co/MrRobotoAI/93) + [hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora](https://huggingface.co/hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora)
* [MrRobotoAI/231](https://huggingface.co/MrRobotoAI/231)
* [MrRobotoAI/229](https://huggingface.co/MrRobotoAI/229)
* [MrRobotoAI/218](https://huggingface.co/MrRobotoAI/218) + [hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora](https://huggingface.co/hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/93+hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora # 13,552
- model: MrRobotoAI/155+hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora # 13,658
- model: MrRobotoAI/218+hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora # 14,970
- model: MrRobotoAI/227+hf-100/Llama-3.1-Spellbound-StoryWriter-0.1-lora
- model: MrRobotoAI/228
- model: MrRobotoAI/229
- model: MrRobotoAI/231
merge_method: model_stock
base_model: MrRobotoAI/228
normalize: true
dtype: float16
```
|
| petrznel/plastic_can | petrznel | 2025-03-03T15:29:59Z | 24 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-01-14T15:00:30Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    A plastic can containing supplements perched on a rocky cliff edge during
    sunrise, with dramatic light reflecting off its metallic surface and a
    distant runner preparing for a trail adventure.
  output:
    url: images/R8_FLUX_XLABS_00001_.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Plastic Can
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/petrznel/plastic_can/tree/main) them in the Files & versions tab.
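The card ships no usage snippet; a hedged sketch, assuming the standard diffusers FLUX pipeline with this LoRA applied (the prompt is illustrative; the card defines no trigger word):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("petrznel/plastic_can")

image = pipe("a plastic supplement can on a rocky cliff at sunrise").images[0]
image.save("plastic_can.png")
```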
|
| Vaish1707/text2sql | Vaish1707 | 2025-03-03T15:28:44Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-03-03T15:27:54Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Vaish1707
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
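The card stops at the training note; a minimal inference sketch, assuming the fine-tuned weights were saved as a standalone `transformers` checkpoint (the prompt format is a guess, not the author's documented template):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Vaish1707/text2sql")
model = AutoModelForCausalLM.from_pretrained("Vaish1707/text2sql", device_map="auto")

prompt = "Translate to SQL: list all customers who placed an order in 2024"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```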
|
| zimka/ppo-Huggy | zimka | 2025-03-03T15:27:27Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2025-03-03T15:27:23Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: zimka/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
| markwil/cherakshin_style_LoRA | markwil | 2025-03-03T15:26:57Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2025-03-03T15:26:49Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in CHERKASHIN style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - markwil/cherakshin_style_LoRA
<Gallery />
## Model description
These are markwil/cherakshin_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `photo collage in CHERKASHIN style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/markwil/cherakshin_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
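# A hedged sketch (not the author's snippet): load the SDXL base model and
# apply this repo's LoRA weights with diffusers; file names are left at the
# training script's defaults.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("markwil/cherakshin_style_LoRA")

# Trigger phrase taken from the card's "Trigger words" section.
image = pipe("photo collage in CHERKASHIN style").images[0]
image.save("cherkashin_collage.png")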
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
| MrRobotoAI/B | MrRobotoAI | 2025-03-03T15:26:53Z | 8 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:MrRobotoAI/218", "base_model:merge:MrRobotoAI/218", "base_model:MrRobotoAI/227", "base_model:merge:MrRobotoAI/227", "base_model:MrRobotoAI/229", "base_model:merge:MrRobotoAI/229", "base_model:MrRobotoAI/231", "base_model:merge:MrRobotoAI/231", "base_model:MrRobotoAI/240", "base_model:merge:MrRobotoAI/240", "base_model:MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K", "base_model:merge:MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K", "base_model:MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K", "base_model:merge:MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-03-03T05:06:04Z |
---
base_model:
- MrRobotoAI/227
- MrRobotoAI/240
- MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K
- MrRobotoAI/229
- MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K
- MrRobotoAI/231
- MrRobotoAI/218
library_name: transformers
tags:
- mergekit
- merge
---
# merge 13,641
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K](https://huggingface.co/MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/227](https://huggingface.co/MrRobotoAI/227)
* [MrRobotoAI/240](https://huggingface.co/MrRobotoAI/240)
* [MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K](https://huggingface.co/MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K)
* [MrRobotoAI/229](https://huggingface.co/MrRobotoAI/229)
* [MrRobotoAI/231](https://huggingface.co/MrRobotoAI/231)
* [MrRobotoAI/218](https://huggingface.co/MrRobotoAI/218)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/218 # 14,970
- model: MrRobotoAI/227
- model: MrRobotoAI/229
- model: MrRobotoAI/231
- model: MrRobotoAI/240
- model: MrRobotoAI/Hel-v4-8b-DARK-FICTION-128K
- model: MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K
merge_method: model_stock
base_model: MrRobotoAI/Heimdall-v2.1-8b-MANCHESTER-WRITER-128K
normalize: true
dtype: float16
```
|
| irishprancer/c97ecd2c-29f3-4648-a667-b8d178914db4 | irishprancer | 2025-03-03T15:25:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-03-03T13:41:10Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-identity | tomaarsen | 2025-03-03T15:25:35Z | 11 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "cross-encoder", "text-classification", "generated_from_trainer", "dataset_size:78704", "loss:ListNetLoss", "en", "dataset:microsoft/ms_marco", "arxiv:1908.10084", "base_model:microsoft/MiniLM-L12-H384-uncased", "base_model:finetune:microsoft/MiniLM-L12-H384-uncased", "co2_eq_emissions", "region:us"] | text-classification | 2025-02-27T14:32:06Z |
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- text-classification
- generated_from_trainer
- dataset_size:78704
- loss:ListNetLoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-classification
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 205.4804729340415
  energy_consumed: 0.5286324046031189
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 1.686
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results: []
---
# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-identity")
# Get scores for pairs of texts
pairs = [
['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'How many calories in an egg',
[
'There are on average between 55 and 80 calories in an egg depending on its size.',
'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
'Most of the calories in an egg come from the yellow yolk in the center.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator)
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.4847 (-0.0049) | 0.3325 (+0.0716) | 0.5967 (+0.1771) |
| mrr@10 | 0.4768 (-0.0007) | 0.5669 (+0.0670) | 0.6024 (+0.1757) |
| **ndcg@10** | **0.5573 (+0.0168)** | **0.3623 (+0.0373)** | **0.6499 (+0.1492)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator)
| Metric | Value |
|:------------|:---------------------|
| map | 0.4713 (+0.0813) |
| mrr@10 | 0.5487 (+0.0807) |
| **ndcg@10** | **0.5231 (+0.0678)** |
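These figures come from the evaluators linked above. A sketch for re-running the Nano BEIR evaluation yourself (the `dataset_names` argument is an assumption and may vary across sentence-transformers versions):
```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-listnet-identity")
evaluator = CrossEncoderNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)  # map, mrr@10, ndcg@10 per dataset plus the mean
print(results)
```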
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ms_marco
* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
| | query | docs | labels |
|:--------|:-----------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
| type | string | list | list |
| details | <ul><li>min: 10 characters</li><li>mean: 33.93 characters</li><li>max: 99 characters</li></ul> | <ul><li>size: 10 elements</li></ul> | <ul><li>size: 10 elements</li></ul> |
* Samples:
| query | docs | labels |
|:------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>what types of moons are there</code> | <code>["The different types of moons are: Full Wolf Moon, Full Snow Moon, Full Worm Moon, paschal full moon, full pink moon, full flower moon, full strawberry moon, full buck moon, … full sturgeon moon, full harvest moon, full hunters moon, full beaver moon, full cold moon. The solar eclipse, when the moon blocks the sun's light from hitting the earth-creating a temporary blackout on earth, can occur only at the time of New Moon, while the luna … r eclipse, when the earth blocks the sun's light from reflecting off the moon, can occur only at the time of Full Moon.", 'Types of Moons. Full Moon names date back to Native Americans, of what is now the northern and eastern United States. The tribes kept track of the seasons by giving distinctive names to each recurring full Moon. Their names were applied to the entire month in which each occurred. There was some variation in the Moon names, but in general, the same ones were current throughout the Algonquin tribes from New England to Lake Superio...</code> | <code>[1, 1, 1, 0, 0, ...]</code> |
| <code>what is beryllium commonly combined with</code> | <code>['Beryllium is an industrial metal with some attractive attributes. It’s lighter than aluminum and 6x stronger than steel. It’s usually combined with other metals and is a key component in the aerospace and electronics industries. Beryllium is also used in the production of nuclear weapons. With that, you may not be surprised to learn that beryllium is one of the most toxic elements in existence. Beryllium is a Class A EPA carcinogen and exposure can cause Chronic Beryllium Disease, an often fatal lung disease. ', 'Beryllium is found in about 30 different mineral species. The most important are beryl (beryllium aluminium silicate) and bertrandite (beryllium silicate). Emerald and aquamarine are precious forms of beryl. The metal is usually prepared by reducing beryllium fluoride with magnesium metal. Uses. Beryllium is used in alloys with copper or nickel to make gyroscopes, springs, electrical contacts, spot-welding electrodes and non-sparking tools. Mixing beryllium with these metals...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>is turkish coffee healthy</code> | <code>["Calories, Fat and Other Basics. A serving of Turkish coffee contains about 46 calories. Though the drink doesn't contain any fat, it also doesn't supply any fiber or protein, two key nutrients needed for good health. The coffee doesn't supply an impressive amount of calcium or iron either. A blend of strong coffee, sugar and cardamom, Turkish coffee is more of a sweet treat than something similar to a regular cup of coffee. While there are certain health benefits from the coffee and cardamom, sugar is a major drawback when it comes to the nutritional benefits of the drink", "A serving of Turkish coffee contains about 11.5 grams of sugar, which is equal to almost 3 teaspoons. That's half of the 6 teaspoons women should limit themselves to each day and one-third of the 9 teaspoons men should set as their daily upper limit, according to the American Heart Association. A blend of strong coffee, sugar and cardamom, Turkish coffee is more of a sweet treat than something similar to a regula...</code> | <code>[1, 1, 0, 0, 0, ...]</code> |
* Loss: [<code>ListNetLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#listnetloss) with these parameters:
```json
{
"eps": 1e-10,
"pad_value": -1,
"activation_fct": "torch.nn.modules.linear.Identity"
}
```
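For intuition: top-1 ListNet converts both the model scores and the relevance labels into probability distributions with a softmax, then minimizes the cross-entropy between them. A minimal sketch of that idea (ignoring the `pad_value` masking the library applies to variable-length document lists):
```python
import torch
import torch.nn.functional as F

def listnet_loss(scores: torch.Tensor, labels: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    """scores, labels: (batch, num_docs); labels are graded relevances (e.g. 0/1)."""
    true_dist = F.softmax(labels.float(), dim=-1)          # target distribution from labels
    log_pred = torch.log(F.softmax(scores, dim=-1) + eps)  # predicted distribution from scores
    return -(true_dist * log_pred).sum(dim=-1).mean()      # cross-entropy, averaged over queries
```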
### Evaluation Dataset
#### ms_marco
* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
| | query | docs | labels |
|:--------|:------------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
| type | string | list | list |
| details | <ul><li>min: 10 characters</li><li>mean: 33.81 characters</li><li>max: 110 characters</li></ul> | <ul><li>size: 10 elements</li></ul> | <ul><li>size: 10 elements</li></ul> |
* Samples:
| query | docs | labels |
|:-----------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>what is a fishy smell on humans</code> | <code>["Trimethylaminuria (TMAU), also known as fish odor syndrome or fish malodor syndrome, is a rare metabolic disorder where Trimethylamine is released in the person's sweat, urine, and breath, giving off a strong fishy odor or strong body odor. Body odor is generally considered to be an unpleasant odor among many human cultures.", "The trimethylamine is released in the person's sweat, urine, reproductive fluids, and breath, giving off a strong fishy or body odor. Some people with trimethylaminuria have a strong odor all the time, but most have a moderate smell that varies in intensity over time. Although FMO3 mutations account for most known cases of trimethylaminuria, some cases are caused by other factors. A fish-like body odor could result from an excess of certain proteins in the diet or from an increase in bacteria in the digestive system.", 'Trimethylaminuria is a disorder in which the body is unable to break down trimethylamine, a chemical compound that has a pungent odor. Trimeth...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>how to cut woodworking joints</code> | <code>['The tails and pins interlock to form a strong 90-degree joint. Dovetail joints are technically complex and are often used to create drawer boxes for furniture. Through mortise and tenon – To form this joint, a round or square hole (called a mortise) is cut through the side of one piece of wood. The end of the other piece of wood is cut to have a projection (the tenon) that matches the mortise. The tenon is placed into the mortise, projecting out from the other side of the wood. A wedge is hammered into a hole in the tenon. The wedge keeps the tenon from sliding out of the mortise.', "Wood joinery is simply the method by which two pieces of wood are connected. In many cases, the appearance of a joint becomes at least as important as it's strength. Wood joinery encompasses everything from intricate half-blind dovetails to connections that are simply nailed, glued or screwed. How to Use Biscuit Joints. Share. Doweling as a method of joinery is simple: a few dowels are glued into matchin...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>how long does it take to be a paramedic</code> | <code>['In Kansas, you first have to take an EMT course which is roughly 6 months long depending on where you take the course. Then you have to take A&P, English Comp, Sociology, Algebra, & Interpersonal Communication as pre-requisites for Paramedic. EMT is 110 hours which can be done in 3 weeks or dragged out for several months by just going one night per week to class. The Paramedic is 600 - 1200 hours in length depending on the state and averages about 6 - 9 months of training.', 'Coursework and training to become an EMT-basic or first responder can generally be completed in as little as three weeks on an accelerated basis. For part-time students, these programs may take around 8-11 weeks to complete. To become an EMT-intermediate 1985 or 1999, students generally must complete 30-350 hours of training. This training requirement varies according to the procedures the state allows these EMTs to perform.', 'How long does it take to be a paramedic depends on the area of study and the skill on...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>ListNetLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#listnetloss) with these parameters:
```json
{
"eps": 1e-10,
"pad_value": -1,
"activation_fct": "torch.nn.modules.linear.Identity"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_ndcg@10 | NanoNFCorpus_ndcg@10 | NanoNQ_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:---------:|:-------------:|:---------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------------:|
| -1 | -1 | - | - | 0.0155 (-0.5250) | 0.3609 (+0.0358) | 0.0410 (-0.4597) | 0.1391 (-0.3163) |
| 0.0001 | 1 | 2.1559 | - | - | - | - | - |
| 0.0762 | 1000 | 2.0862 | - | - | - | - | - |
| 0.1525 | 2000 | 2.0787 | - | - | - | - | - |
| 0.2287 | 3000 | 2.0785 | - | - | - | - | - |
| 0.3049 | 4000 | 2.0738 | 2.0755 | 0.5129 (-0.0276) | 0.3371 (+0.0120) | 0.5561 (+0.0555) | 0.4687 (+0.0133) |
| 0.3812 | 5000 | 2.0828 | - | - | - | - | - |
| 0.4574 | 6000 | 2.0711 | - | - | - | - | - |
| 0.5336 | 7000 | 2.072 | - | - | - | - | - |
| 0.6098 | 8000 | 2.0721 | 2.0734 | 0.5627 (+0.0222) | 0.3547 (+0.0296) | 0.5691 (+0.0684) | 0.4955 (+0.0401) |
| 0.6861 | 9000 | 2.0714 | - | - | - | - | - |
| 0.7623 | 10000 | 2.0744 | - | - | - | - | - |
| 0.8385 | 11000 | 2.0708 | - | - | - | - | - |
| **0.9148** | **12000** | **2.0705** | **2.0732** | **0.5573 (+0.0168)** | **0.3623 (+0.0373)** | **0.6499 (+0.1492)** | **0.5231 (+0.0678)** |
| 0.9910 | 13000 | 2.0721 | - | - | - | - | - |
| 1.0672 | 14000 | 2.065 | - | - | - | - | - |
| 1.1435 | 15000 | 2.0732 | - | - | - | - | - |
| 1.2197 | 16000 | 2.07 | 2.0729 | 0.5673 (+0.0269) | 0.3563 (+0.0312) | 0.5877 (+0.0870) | 0.5038 (+0.0484) |
| 1.2959 | 17000 | 2.0707 | - | - | - | - | - |
| 1.3722 | 18000 | 2.0719 | - | - | - | - | - |
| 1.4484 | 19000 | 2.0687 | - | - | - | - | - |
| 1.5246 | 20000 | 2.0675 | 2.0730 | 0.5633 (+0.0228) | 0.3264 (+0.0014) | 0.5949 (+0.0943) | 0.4949 (+0.0395) |
| 1.6009 | 21000 | 2.0698 | - | - | - | - | - |
| 1.6771 | 22000 | 2.0685 | - | - | - | - | - |
| 1.7533 | 23000 | 2.0683 | - | - | - | - | - |
| 1.8295 | 24000 | 2.0667 | 2.0731 | 0.5571 (+0.0166) | 0.3521 (+0.0271) | 0.6319 (+0.1313) | 0.5137 (+0.0583) |
| 1.9058 | 25000 | 2.0665 | - | - | - | - | - |
| 1.9820 | 26000 | 2.0707 | - | - | - | - | - |
| 2.0582 | 27000 | 2.0663 | - | - | - | - | - |
| 2.1345 | 28000 | 2.0672 | 2.0739 | 0.5543 (+0.0139) | 0.3346 (+0.0096) | 0.5958 (+0.0952) | 0.4949 (+0.0395) |
| 2.2107 | 29000 | 2.0661 | - | - | - | - | - |
| 2.2869 | 30000 | 2.0681 | - | - | - | - | - |
| 2.3632 | 31000 | 2.0626 | - | - | - | - | - |
| 2.4394 | 32000 | 2.0642 | 2.0745 | 0.5791 (+0.0387) | 0.3347 (+0.0097) | 0.6386 (+0.1380) | 0.5175 (+0.0621) |
| 2.5156 | 33000 | 2.0635 | - | - | - | - | - |
| 2.5919 | 34000 | 2.0648 | - | - | - | - | - |
| 2.6681 | 35000 | 2.0615 | - | - | - | - | - |
| 2.7443 | 36000 | 2.0626 | 2.0736 | 0.5735 (+0.0331) | 0.3288 (+0.0038) | 0.6205 (+0.1198) | 0.5076 (+0.0522) |
| 2.8206 | 37000 | 2.0621 | - | - | - | - | - |
| 2.8968 | 38000 | 2.0664 | - | - | - | - | - |
| 2.9730 | 39000 | 2.0621 | - | - | - | - | - |
| -1 | -1 | - | - | 0.5573 (+0.0168) | 0.3623 (+0.0373) | 0.6499 (+0.1492) | 0.5231 (+0.0678) |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.529 kWh
- **Carbon Emitted**: 0.205 kg of CO2
- **Hours Used**: 1.686 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.48.3
- PyTorch: 2.5.0+cu121
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### ListNetLoss
```bibtex
@inproceedings{cao2007learning,
title={Learning to rank: from pairwise approach to listwise approach},
author={Cao, Zhe and Qin, Tao and Liu, Tie-Yan and Tsai, Ming-Feng and Li, Hang},
booktitle={Proceedings of the 24th international conference on Machine learning},
pages={129--136},
year={2007}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
theraSara/Taxi-v3
|
theraSara
| 2025-03-03T15:24:46Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-03T15:24:45Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the pickle-loading helper from the Deep RL course notebook
model = load_from_hub(repo_id="theraSara/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
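Once the environment is built, the Q-table can drive a greedy rollout. A minimal sketch, assuming the pickled dict stores its Q-table under the `"qtable"` key as in the Deep RL course template:
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```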
|
rootonchair/InternVL2_5-4B-AWQ
|
rootonchair
| 2025-03-03T15:24:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"awq",
"image-text-to-text",
"conversational",
"multilingual",
"base_model:OpenGVLab/InternVL2_5-4B",
"base_model:quantized:OpenGVLab/InternVL2_5-4B",
"license:mit",
"4-bit",
"region:us"
] |
image-text-to-text
| 2025-03-03T14:54:01Z |
---
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
base_model: OpenGVLab/InternVL2_5-4B
base_model_relation: quantized
language:
- multilingual
tags:
- internvl
- custom_code
- awq
---
# InternVL2_5-4B-AWQ
## Introduction
This is an AWQ quantization of [OpenGVLab/InternVL2_5-4B](https://huggingface.co/OpenGVLab/InternVL2_5-4B), produced with `autoawq`.
## Benchmark
| Model Name | MMBench_DEV_EN | OCRBench |
|:----------:|:--------------:|:--------:|
| OpenGVLab/InternVL2_5-4B | 82.56 | 82.8 |
| InternVL2_5-4B-AWQ | 82.3 | 80.5 |
## Quick Start
We provide example code to run `InternVL2_5-4B-AWQ` with `transformers`.
> Please use transformers>=4.37.2 to ensure the model works normally.
### Model Loading
#### 16-bit (bf16 / fp16)
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "rootonchair/InternVL2_5-4B-AWQ"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
```
#### BNB 8-bit Quantization
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "rootonchair/InternVL2_5-4B-AWQ"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.float16,
load_in_8bit=True,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval()
```
#### Multiple GPUs
The reason for writing the code this way is to avoid errors that occur during multi-GPU inference due to tensors not being on the same device. By ensuring that the first and last layers of the large language model (LLM) are on the same device, we prevent such errors.
```python
import math
import torch
from transformers import AutoTokenizer, AutoModel
def split_model(model_name):
device_map = {}
world_size = torch.cuda.device_count()
num_layers = {
'InternVL2_5-1B': 24, 'InternVL2_5-2B': 24, 'InternVL2_5-4B': 36, 'InternVL2_5-8B': 32,
'InternVL2_5-26B': 48, 'InternVL2_5-38B': 64, 'InternVL2_5-78B': 80}[model_name]
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
path = "rootonchair/InternVL2_5-4B-AWQ"
device_map = split_model('InternVL2_5-4B')
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
```
### Inference with Transformers
```python
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
# If you want to load a model using multiple GPUs, please refer to the `Multiple GPUs` section.
path = 'rootonchair/InternVL2_5-4B-AWQ'
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.float16).cuda()  # match the fp16 model dtype
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation (纯文本对话)
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation (单图单轮对话)
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation (单图多轮对话)
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images (多图多轮对话,拼接图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.float16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.float16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images (多图多轮对话,独立图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.float16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.float16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample (单图批处理)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.float16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.float16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation (视频多轮对话)
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.float16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
|
MrRobotoAI/C
|
MrRobotoAI
| 2025-03-03T15:24:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:MrRobotoAI/218",
"base_model:merge:MrRobotoAI/218",
"base_model:MrRobotoAI/227",
"base_model:merge:MrRobotoAI/227",
"base_model:MrRobotoAI/229",
"base_model:merge:MrRobotoAI/229",
"base_model:MrRobotoAI/231",
"base_model:merge:MrRobotoAI/231",
"base_model:MrRobotoAI/240",
"base_model:merge:MrRobotoAI/240",
"base_model:MrRobotoAI/B",
"base_model:merge:MrRobotoAI/B",
"base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K",
"base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T06:00:16Z |
---
base_model:
- MrRobotoAI/229
- MrRobotoAI/231
- MrRobotoAI/218
- MrRobotoAI/Odin-v2-8b-NOVELIST-128K
- MrRobotoAI/240
- MrRobotoAI/242
- MrRobotoAI/227
library_name: transformers
tags:
- mergekit
- merge
---
# merge 8,992
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) as a base.
### Models Merged
The following models were included in the merge:
* [MrRobotoAI/229](https://huggingface.co/MrRobotoAI/229)
* [MrRobotoAI/231](https://huggingface.co/MrRobotoAI/231)
* [MrRobotoAI/218](https://huggingface.co/MrRobotoAI/218)
* [MrRobotoAI/240](https://huggingface.co/MrRobotoAI/240)
* [MrRobotoAI/242](https://huggingface.co/MrRobotoAI/242)
* [MrRobotoAI/227](https://huggingface.co/MrRobotoAI/227)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrRobotoAI/218 # 14,970
- model: MrRobotoAI/227
- model: MrRobotoAI/229
- model: MrRobotoAI/231
- model: MrRobotoAI/240
- model: MrRobotoAI/242 #13,641
- model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K
merge_method: model_stock
base_model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K
normalize: true
dtype: float16
```
|
lesso10/2b24fe79-14fe-48f1-9b7d-ee2764998adb
|
lesso10
| 2025-03-03T15:23:43Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-03-03T12:16:57Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2b24fe79-14fe-48f1-9b7d-ee2764998adb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 2b24fe79-14fe-48f1-9b7d-ee2764998adb
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` rendering follows the list):
- learning_rate: 0.00021
- train_batch_size: 4
- eval_batch_size: 4
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
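Expressed as `transformers` `TrainingArguments`, the schedule above would look roughly like the sketch below (illustrative only; the run itself was configured through Axolotl):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="2b24fe79-14fe-48f1-9b7d-ee2764998adb",
    learning_rate=0.00021,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=100,
    gradient_accumulation_steps=8,  # total train batch size: 4 * 8 = 32
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=500,
)
```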
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.5849 |
| 0.5222 | 0.3135 | 500 | 0.4791 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso09/26920111-924d-4d5e-8efc-15d45c482a1f
|
lesso09
| 2025-03-03T15:23:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b",
"base_model:adapter:unsloth/gemma-2-2b",
"license:gemma",
"region:us"
] | null | 2025-03-03T12:16:49Z |
---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 26920111-924d-4d5e-8efc-15d45c482a1f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 26920111-924d-4d5e-8efc-15d45c482a1f
This model is a fine-tuned version of [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000209
- train_batch_size: 4
- eval_batch_size: 4
- seed: 90
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0006 | 1 | 2.5927 |
| 0.5003 | 0.3135 | 500 | 0.4789 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso04/584f06db-94f2-42c3-a4d3-e1b3a5b14dbb
|
lesso04
| 2025-03-03T15:21:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T13:56:25Z |
---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 584f06db-94f2-42c3-a4d3-e1b3a5b14dbb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 584f06db-94f2-42c3-a4d3-e1b3a5b14dbb
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4129
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0007 | 1 | 0.9570 |
| 0.4086 | 0.3344 | 500 | 0.4129 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso03/9085010b-0e03-4485-b832-2c2c9cb03c28
|
lesso03
| 2025-03-03T15:21:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"base_model:adapter:aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct",
"license:llama3",
"region:us"
] | null | 2025-03-03T12:36:49Z |
---
library_name: peft
license: llama3
base_model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9085010b-0e03-4485-b832-2c2c9cb03c28
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 9085010b-0e03-4485-b832-2c2c9cb03c28
This model is a fine-tuned version of [aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct](https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.2606 |
| 1.4444 | 0.0826 | 500 | 1.4491 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
lesso02/59f3097d-5df4-40e6-b746-cf88f9649523
|
lesso02
| 2025-03-03T15:21:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T11:34:16Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Mistral-Nemo-Instruct-2407
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 59f3097d-5df4-40e6-b746-cf88f9649523
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 59f3097d-5df4-40e6-b746-cf88f9649523
This model is a fine-tuned version of [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0405
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 4
- eval_batch_size: 4
- seed: 20
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.4249 |
| 2.1031 | 0.0064 | 500 | 1.0405 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ogizhelev/european_carplates
|
ogizhelev
| 2025-03-03T15:20:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T15:20:24Z |
---
license: apache-2.0
---
|
featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF
|
featherless-ai-quants
| 2025-03-03T15:19:39Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:huihui-ai/Qwen2.5-7B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-7B-Instruct-abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-03T15:12:58Z |
---
base_model: huihui-ai/Qwen2.5-7B-Instruct-abliterated
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# huihui-ai/Qwen2.5-7B-Instruct-abliterated GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-IQ4_XS.gguf) | 4053.40 MB |
| Q2_K | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q2_K.gguf) | 2876.23 MB |
| Q3_K_L | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q3_K_L.gguf) | 3899.06 MB |
| Q3_K_M | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q3_K_M.gguf) | 3631.97 MB |
| Q3_K_S | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q3_K_S.gguf) | 3330.58 MB |
| Q4_K_M | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q4_K_M.gguf) | 4466.13 MB |
| Q4_K_S | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q4_K_S.gguf) | 4251.26 MB |
| Q5_K_M | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q5_K_M.gguf) | 5192.60 MB |
| Q5_K_S | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q5_K_S.gguf) | 5068.95 MB |
| Q6_K | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q6_K.gguf) | 5964.47 MB |
| Q8_0 | [huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF/blob/main/huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q8_0.gguf) | 7723.36 MB |
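To fetch a single quantization file programmatically, `huggingface_hub` works as usual; Q4_K_M is shown here as an example:
```python
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="featherless-ai-quants/huihui-ai-Qwen2.5-7B-Instruct-abliterated-GGUF",
    filename="huihui-ai-Qwen2.5-7B-Instruct-abliterated-Q4_K_M.gguf",
)
print(gguf_path)  # local path, usable with llama.cpp or other GGUF runtimes
```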
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
|
ivolegrey/Sci-fi_Sketch_Style_SD1.5
|
ivolegrey
| 2025-03-03T15:16:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:mit",
"region:us"
] |
text-to-image
| 2025-03-03T15:15:55Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, 1girl, full body, shorter black hair, beautiful face, pretty
eyes, smile, blush, black v-shirt, cleavage, large breasts, simple white
background
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00008_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, 1girl, full body, shorter black hair, beautiful face, pretty
eyes, smile, blush, black v-shirt, cleavage, large breasts, simple white
background
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00019_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, futuristic city, gigantic structures, intricate architecture,
bridges, roads, small buildings, bright sky
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00027_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, futuristic city, gigantic structures, intricate architecture,
bridges, roads, small buildings, bright sky
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00032_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, futuristic city, gigantic structures, intricate architecture,
bridges, roads, small buildings, bright sky
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00038_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, overgrown ruins, beautiful architecture, mangrove swamp,
gigantic trees, river, dense vegetation, fog
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00040_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, overgrown ruins, beautiful architecture, mangrove swamp,
gigantic trees, river, dense vegetation, fog
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00041_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, overgrown ruins, beautiful architecture, mangrove swamp,
gigantic trees, river, dense vegetation, fog
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00046_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, futuristic metal ruins, barren landscape, outer space, black
sky, stars, planet
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00060_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
monochromatic, futuristic metal ruins, barren landscape, outer space, black
sky, stars, planet
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00062_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
1girl, brown hair, ponytail, perfect body, large breasts, pink bikini,
futuristic city, beach, tropical, palm trees, sunset, epic clouds
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00079_.png
- text: >-
masterpiece, high quality, best quality, rough sketch, messy line art,
1girl, brown hair, ponytail, perfect body, large breasts, pink bikini,
futuristic city, beach, tropical, palm trees, sunset, epic clouds
parameters:
negative_prompt: bad anatomy, bad hands, ugly, text, watermark, worst quality, bad quality
output:
url: images/Sci-fi_Sketch_SD1.5_Upscaled_00085_.png
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: null
license: mit
---
# Sci-fi Sketch Style SD1.5
<Gallery />
## Model description
This LoRA is designed to produce a rough pen sketch style, while also being able to generate futuristic places, natural environments, space, horrifying monsters, giant mechas and aesthetic people.
## Usage
There is no trigger word, but prompts such as "rough sketch", "monochromatic/desaturated", "messy line art", "flat color" and "sci-fi" can help a lot.
Training was done on CivitAI with over 500 images generated using Microsoft's Copilot Designer, using auto-generated captions with a few manual adjustments.
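A minimal diffusers sketch for using the LoRA (the exact weight filename in the repo is not specified here, so `load_lora_weights` is pointed at the repo id and left as an assumption):
```py
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Base model this LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Attach the LoRA from this repo (assumes a standard safetensors LoRA file)
pipe.load_lora_weights("ivolegrey/Sci-fi_Sketch_Style_SD1.5")

prompt = ("masterpiece, high quality, best quality, rough sketch, messy line art, "
          "monochromatic, futuristic city, gigantic structures, bright sky")
image = pipe(prompt, negative_prompt="bad anatomy, worst quality, bad quality",
             num_inference_steps=30).images[0]
image.save("scifi_sketch.png")
```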
## Download model
Weights for this model are available in Safetensors format.
[Download](/ivolegrey/Sci-fi_Sketch_Style_SD1.5/tree/main) them in the Files & versions tab.
|
d-rang-d/MS3-RP-Broth-24B
|
d-rang-d
| 2025-03-03T15:15:30Z | 0 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4",
"base_model:merge:ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4",
"base_model:PocketDoc/Dans-DangerousWinds-V1.1.1-24b",
"base_model:merge:PocketDoc/Dans-DangerousWinds-V1.1.1-24b",
"base_model:ReadyArt/Forgotten-Safeword-24B-V2.2",
"base_model:merge:ReadyArt/Forgotten-Safeword-24B-V2.2",
"base_model:TheDrummer/Cydonia-24B-v2",
"base_model:merge:TheDrummer/Cydonia-24B-v2",
"base_model:ToastyPigeon/ms3-roselily-rp-v2",
"base_model:merge:ToastyPigeon/ms3-roselily-rp-v2",
"base_model:estrogen/MS2501-24b-Ink-apollo-ep2",
"base_model:merge:estrogen/MS2501-24b-Ink-apollo-ep2",
"base_model:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:merge:huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated",
"base_model:trashpanda-org/Llama3-24B-Mullein-v1",
"base_model:merge:trashpanda-org/Llama3-24B-Mullein-v1",
"base_model:trashpanda-org/MS-24B-Instruct-Mullein-v0",
"base_model:merge:trashpanda-org/MS-24B-Instruct-Mullein-v0",
"base_model:unsloth/Mistral-Small-24B-Base-2501",
"base_model:merge:unsloth/Mistral-Small-24B-Base-2501",
"base_model:unsloth/Mistral-Small-24B-Instruct-2501",
"base_model:merge:unsloth/Mistral-Small-24B-Instruct-2501",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T22:48:58Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- unsloth/Mistral-Small-24B-Base-2501
- unsloth/Mistral-Small-24B-Instruct-2501
- trashpanda-org/MS-24B-Instruct-Mullein-v0
- trashpanda-org/Llama3-24B-Mullein-v1
- ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4
- TheDrummer/Cydonia-24B-v2
- estrogen/MS2501-24b-Ink-apollo-ep2
- huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
- ToastyPigeon/ms3-roselily-rp-v2
- PocketDoc/Dans-DangerousWinds-V1.1.1-24b
- ReadyArt/Forgotten-Safeword-24B-V2.2
---
***
### Overview
One of the merging steps for [Tantum](https://huggingface.co/Nohobby/MS3-Tantum-24B-v0.1). Might be better than the end result.
## Model files may not be downloadable
You can get full-weight files from here: https://huggingface.co/mergekit-community/MS-RP-whole
This happened because I was using the mergekit-gui space for merging and got lazy about manually dragging the intermediate steps to my org, so I just set it to upload to mergekit-community. When I learned that this thing was usable on its own, I decided to add some info to the model card and duplicated the repo here before linking it in the Tantum readme file.
yeah
**Settings:**
Samplers: [Weird preset](https://files.catbox.moe/ccwmca.json) | [Forgotten-Safeword preset](https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-Extra-Dry)
Prompt format: Mistral-V7-Tekken (?)
I use [this](https://files.catbox.moe/daluze.json) lorebook for all chats instead of a system prompt for Mistral models.
### Quants
[Static](https://huggingface.co/mradermacher/MS-RP-whole-GGUF) | [Imatrix](https://huggingface.co/mradermacher/MS-RP-whole-i1-GGUF)
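A minimal transformers sketch for loading the full-weight repo linked above (an assumption that the standard causal-LM API applies; the GGUF quants above go through llama.cpp instead):
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/MS-RP-whole"  # full-weight files linked above
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"  # device_map needs accelerate
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```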
***
## Merge Details
### Merging steps
## MS3-test-Merge-1
```yaml
models:
- model: unsloth/Mistral-Small-24B-Base-2501
- model: unsloth/Mistral-Small-24B-Instruct-2501+ToastyPigeon/new-ms-rp-test-ws
parameters:
select_topk:
- value: [0.05, 0.03, 0.02, 0.02, 0.01]
- model: unsloth/Mistral-Small-24B-Instruct-2501+estrogen/MS2501-24b-Ink-ep2-adpt
parameters:
select_topk: 0.1
- model: trashpanda-org/MS-24B-Instruct-Mullein-v0
parameters:
select_topk: 0.4
base_model: unsloth/Mistral-Small-24B-Base-2501
merge_method: sce
parameters:
int8_mask: true
rescale: true
normalize: true
dtype: bfloat16
tokenizer_source: base
```
```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
parameters:
density: 0.55
base_model: Step1
models:
- model: unsloth/Mistral-Small-24B-Instruct-2501
parameters:
weight:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: o_proj
value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
- filter: up_proj
value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: down_proj
value: [1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
- value: 0
- model: Step1
parameters:
weight:
- filter: v_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: o_proj
value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
- filter: up_proj
value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- filter: gate_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: down_proj
value: [0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1]
- value: 1
```
Some early MS3 merge. Not really worth using on its own. Just added it for fun.
## RP-half1
```yaml
models:
- model: ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4
parameters:
weight: 0.2
density: 0.7
- model: trashpanda-org/Llama3-24B-Mullein-v1
parameters:
weight: 0.2
density: 0.7
- model: TheDrummer/Cydonia-24B-v2
parameters:
weight: 0.2
density: 0.7
merge_method: della_linear
base_model: Nohobby/MS3-test-Merge-1
parameters:
epsilon: 0.2
lambda: 1.1
dtype: bfloat16
tokenizer:
source: base
```
## RP-half2
```yaml
base_model: Nohobby/MS3-test-Merge-1
parameters:
epsilon: 0.05
lambda: 0.9
int8_mask: true
rescale: true
normalize: false
dtype: bfloat16
tokenizer:
source: base
merge_method: della
models:
- model: estrogen/MS2501-24b-Ink-apollo-ep2
parameters:
weight: [0.1, -0.01, 0.1, -0.02, 0.1]
density: [0.6, 0.4, 0.5, 0.4, 0.6]
- model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
parameters:
weight: [0.02, -0.01, 0.02, -0.02, 0.01]
density: [0.45, 0.55, 0.45, 0.55, 0.45]
- model: ToastyPigeon/ms3-roselily-rp-v2
parameters:
weight: [0.01, -0.02, 0.02, -0.025, 0.01]
density: [0.45, 0.65, 0.45, 0.65, 0.45]
- model: PocketDoc/Dans-DangerousWinds-V1.1.1-24b
parameters:
weight: [0.1, -0.01, 0.1, -0.02, 0.1]
density: [0.6, 0.4, 0.5, 0.4, 0.6]
```
## RP-broth/MS-RP-whole
```yaml
base_model: ReadyArt/Forgotten-Safeword-24B-V2.2
merge_method: model_stock
dtype: bfloat16
models:
- model: mergekit-community/MS3-RP-half1
- model: mergekit-community/MS3-RP-RP-half2
```
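For reference, configs like the ones above can be executed through mergekit's Python entry point (a sketch following the mergekit README; import paths and available options may differ between mergekit versions, and the config filename is hypothetical):
```py
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse one of the YAML configs above and execute the merge
with open("rp_broth.yaml") as f:  # hypothetical filename for a config above
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./MS3-RP-Broth-24B",  # output directory for the merged weights
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```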
|
logasja/auramask-ensemble-brooklyn
|
logasja
| 2025-03-03T15:15:09Z | 2 | 0 |
keras
|
[
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] |
image-to-image
| 2025-03-02T20:01:28Z |
---
library_name: keras
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
datasets:
- logasja/FDF
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
license: gpl-3.0
pipeline_tag: image-to-image
tags:
- adversarial
- aesthetic
- quality
- filter
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/9dc30dfddd79aa32c7d0d70a315dcd1c)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 32,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_ArcFace": {
"d": "cosine_similarity",
"f": "ArcFace",
"name": "FEAT_ArcFace",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.05
},
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.05
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot

|
juhw/uiop69
|
juhw
| 2025-03-03T15:15:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T15:11:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hllllllllll/test
|
hllllllllll
| 2025-03-03T15:14:59Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"chemistry",
"ab",
"dataset:open-thoughts/OpenThoughts-114k",
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T15:05:05Z |
---
license: apache-2.0
datasets:
- open-thoughts/OpenThoughts-114k
language:
- ab
metrics:
- accuracy
new_version: deepseek-ai/DeepSeek-R1
library_name: adapter-transformers
tags:
- chemistry
---
|
aayeshanakarmi/distractor-generation-redstone-flant5small
|
aayeshanakarmi
| 2025-03-03T15:12:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-03-03T15:12:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
varunkarwa/Qwen2.5_OPUS_BOOK
|
varunkarwa
| 2025-03-03T15:09:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-02-26T16:04:14Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shri2703/qwen2-5-1.5b-finetuned
|
Shri2703
| 2025-03-03T15:08:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen",
"lora",
"causal-lm",
"en",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T12:20:30Z |
---
language: en
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- qwen
- lora
- peft
- causal-lm
---
# Qwen2.5-1.5B-Instruct Fine-tuned Model
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) using LoRA (Low-Rank Adaptation).
## Training Details
- Trained for 2 epochs on a custom dataset
- Used 4-bit quantization for efficient training
- Used the LoRA+ technique with a ratio of 16.0
- Trained with a batch size of 1 and gradient accumulation steps of 12
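## Usage
A minimal PEFT sketch for loading the adapter on top of the base model (a sketch; assumes this repo contains standard PEFT adapter files):
```py
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "Qwen/Qwen2.5-1.5B-Instruct"
adapter = "Shri2703/qwen2-5-1.5b-finetuned"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA adapter

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```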
|
saim1212/qwen2_7b_unfreezefinetune
|
saim1212
| 2025-03-03T15:07:32Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-VL-7B-Instruct",
"license:other",
"region:us"
] | null | 2025-03-02T09:39:51Z |
---
library_name: peft
license: other
base_model: Qwen/Qwen2-VL-7B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: qwen2vl_lora_16lr_7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2vl_lora_16lr_7b
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) on the talk2car dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.21.0
|
taedam/1
|
taedam
| 2025-03-03T15:07:22Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-03-02T09:40:07Z |
---
license: other
license_name: '1'
license_link: LICENSE
---
|
od2025/alpha_ghost
|
od2025
| 2025-03-03T15:06:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-03-03T14:59:04Z |
---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
With [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Contrary to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license.
|
od2025/silent_byte
|
od2025
| 2025-03-03T15:04:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"text-to-speech",
"annotation",
"en",
"dataset:parler-tts/mls_eng",
"dataset:parler-tts/libritts_r_filtered",
"dataset:parler-tts/libritts-r-filtered-speaker-descriptions",
"dataset:parler-tts/mls-eng-speaker-descriptions",
"arxiv:2402.01912",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-03-03T14:57:59Z |
---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
pipeline_tag: text-to-speech
inference: false
datasets:
- parler-tts/mls_eng
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls-eng-speaker-descriptions
---
<img src="https://huggingface.co/datasets/parler-tts/images/resolve/main/thumbnail.png" alt="Parler Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Parler-TTS Mini v1
<a target="_blank" href="https://huggingface.co/spaces/parler-tts/parler_tts">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
**Parler-TTS Mini v1** is a lightweight text-to-speech (TTS) model, trained on 45K hours of audio data, that can generate high-quality, natural sounding speech with features that can be controlled using a simple text prompt (e.g. gender, background noise, speaking rate, pitch and reverberation).
With [Parler-TTS Large v1](https://huggingface.co/parler-tts/parler-tts-large-v1), this is the second set of models published as part of the [Parler-TTS](https://github.com/huggingface/parler-tts) project, which aims to provide the community with TTS training resources and dataset pre-processing code.
## 📖 Quick Index
* [👨💻 Installation](#👨💻-installation)
* [🎲 Using a random voice](#🎲-random-voice)
* [🎯 Using a specific speaker](#🎯-using-a-specific-speaker)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)
## 🛠️ Usage
### 👨💻 Installation
Using Parler-TTS is as simple as "bonjour". Simply install the library once:
```sh
pip install git+https://github.com/huggingface/parler-tts.git
```
### 🎲 Random voice
**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
### 🎯 Using a specific speaker
To ensure speaker consistency across generations, this checkpoint was also trained on 34 speakers, characterized by name (e.g. Jon, Lea, Gary, Jenna, Mike, Laura).
To take advantage of this, simply adapt your text description to specify which speaker to use: `Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise.`
```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-v1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")
prompt = "Hey, how are you doing today?"
description = "Jon's voice is monotone yet slightly fast in delivery, with a very close recording that almost has no background noise."
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming!
* Include the term "very clear audio" to generate the highest quality audio, and "very noisy audio" for high levels of background noise
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt
## Motivation
Parler-TTS is a reproduction of work from the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.
Contrary to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.
Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.
## Citation
If you found this repository useful, please consider citing this work and also the original Stability AI paper:
```
@misc{lacombe-etal-2024-parler-tts,
author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
title = {Parler-TTS},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
```
@misc{lyth2024natural,
title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
author={Dan Lyth and Simon King},
year={2024},
eprint={2402.01912},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
## License
This model is permissively licensed under the Apache 2.0 license.
|
jedzqg/zhihu-model
|
jedzqg
| 2025-03-03T15:00:57Z | 0 | 0 | null |
[
"gguf",
"llama",
"zh",
"dataset:wangrui6/Zhihu-KOL",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-03T14:43:06Z |
---
datasets:
- wangrui6/Zhihu-KOL
language:
- zh
base_model:
- unsloth/DeepSeek-R1-Distill-Llama-8B
---
This model was fine-tuned from unsloth/DeepSeek-R1-Distill-Llama-8B on the https://hf-mirror.com/datasets/wangrui6/Zhihu-KOL dataset.
|
ObeJ/granite-3.2-8b-instruct-Q8_0-GGUF
|
ObeJ
| 2025-03-03T14:58:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"language",
"granite-3.2",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:ibm-granite/granite-3.2-8b-instruct",
"base_model:quantized:ibm-granite/granite-3.2-8b-instruct",
"license:apache-2.0",
"region:us",
"conversational"
] |
text-generation
| 2025-03-03T14:57:42Z |
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.2
- llama-cpp
- gguf-my-repo
base_model: ibm-granite/granite-3.2-8b-instruct
---
# ObeJ/granite-3.2-8b-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-3.2-8b-instruct`](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ObeJ/granite-3.2-8b-instruct-Q8_0-GGUF --hf-file granite-3.2-8b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ObeJ/granite-3.2-8b-instruct-Q8_0-GGUF --hf-file granite-3.2-8b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ObeJ/granite-3.2-8b-instruct-Q8_0-GGUF --hf-file granite-3.2-8b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ObeJ/granite-3.2-8b-instruct-Q8_0-GGUF --hf-file granite-3.2-8b-instruct-q8_0.gguf -c 2048
```
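The same file can also be used from Python via llama-cpp-python (a sketch; assumes `pip install llama-cpp-python` plus `huggingface_hub` for Hub downloads):
```py
from llama_cpp import Llama

# Download the GGUF from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="ObeJ/granite-3.2-8b-instruct-Q8_0-GGUF",
    filename="granite-3.2-8b-instruct-q8_0.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```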
|
vinit2534/deepseek_finetuned2
|
vinit2534
| 2025-03-03T14:57:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T14:57:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahmadicitian/ats
|
ahmadicitian
| 2025-03-03T14:56:49Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T14:56:49Z |
---
license: apache-2.0
---
|
mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF
|
mradermacher
| 2025-03-03T14:56:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"axolotl",
"en",
"zh",
"dataset:allenai/tulu-3-sft-personas-instruction-following",
"dataset:simplescaling/s1K-1.1",
"dataset:simplescaling/s1K-claude-3-7-sonnet",
"dataset:reedmayhew/medical-o1-reasoning-SFT-jsonl",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"dataset:cognitivecomputations/SystemChat-2.0",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:allura-org/scienceqa_sharegpt",
"base_model:allura-org/Mistral-Small-Sisyphus-24b-2503",
"base_model:quantized:allura-org/Mistral-Small-Sisyphus-24b-2503",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-03T11:44:04Z |
---
base_model: allura-org/Mistral-Small-Sisyphus-24b-2503
datasets:
- allenai/tulu-3-sft-personas-instruction-following
- simplescaling/s1K-1.1
- simplescaling/s1K-claude-3-7-sonnet
- reedmayhew/medical-o1-reasoning-SFT-jsonl
- OpenCoder-LLM/opc-sft-stage1
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
- cognitivecomputations/SystemChat-2.0
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- allura-org/scienceqa_sharegpt
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/allura-org/Mistral-Small-Sisyphus-24b-2503
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ1_S.gguf) | i1-IQ1_S | 5.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ1_M.gguf) | i1-IQ1_M | 5.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ2_S.gguf) | i1-IQ2_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ2_M.gguf) | i1-IQ2_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q2_K_S.gguf) | i1-Q2_K_S | 8.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q2_K.gguf) | i1-Q2_K | 9.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ3_S.gguf) | i1-IQ3_S | 10.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ3_M.gguf) | i1-IQ3_M | 10.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q4_0.gguf) | i1-Q4_0 | 13.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q4_1.gguf) | i1-Q4_1 | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q5_K_M.gguf) | i1-Q5_K_M | 16.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Sisyphus-24b-2503-i1-GGUF/resolve/main/Mistral-Small-Sisyphus-24b-2503.i1-Q6_K.gguf) | i1-Q6_K | 19.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF
|
mradermacher
| 2025-03-03T14:56:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:vtriple/Qwen-2.5-7B-Threatflux",
"base_model:quantized:vtriple/Qwen-2.5-7B-Threatflux",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-03T11:37:55Z |
---
base_model: vtriple/Qwen-2.5-7B-Threatflux
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/vtriple/Qwen-2.5-7B-Threatflux
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-7B-Threatflux-i1-GGUF/resolve/main/Qwen-2.5-7B-Threatflux.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kiranshivaraju/LLama-manufacturing-8B-Instruct-test
|
kiranshivaraju
| 2025-03-03T14:56:01Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T14:35:00Z |
---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kiranshivaraju
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mergekit-community/L3.1-Athena-b-8B
|
mergekit-community
| 2025-03-03T14:54:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:merge:Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B",
"base_model:Hastagaras/Jamet-8B-L3-MK.V-Blackroot",
"base_model:merge:Hastagaras/Jamet-8B-L3-MK.V-Blackroot",
"base_model:MathGenie/MathCoder2-Llama-3-8B",
"base_model:merge:MathGenie/MathCoder2-Llama-3-8B",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:merge:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:Sao10K/L3-8B-Lunaris-v1",
"base_model:merge:Sao10K/L3-8B-Lunaris-v1",
"base_model:Skywork/Skywork-o1-Open-Llama-3.1-8B",
"base_model:merge:Skywork/Skywork-o1-Open-Llama-3.1-8B",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:merge:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:mergekit-community/L3-Boshima-a",
"base_model:merge:mergekit-community/L3-Boshima-a",
"base_model:mergekit-community/L3.1-Athena-a-8B",
"base_model:merge:mergekit-community/L3.1-Athena-a-8B",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:merge:meta-llama/Llama-3.1-8B",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T14:47:24Z |
---
base_model:
- meta-llama/Llama-3.1-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- mergekit-community/L3.1-Athena-a-8B
- Sao10K/L3-8B-Lunaris-v1
- mlabonne/NeuralDaredevil-8B-abliterated
- mergekit-community/L3-Boshima-a
- Skywork/Skywork-o1-Open-Llama-3.1-8B
- MathGenie/MathCoder2-Llama-3-8B
- NousResearch/Hermes-3-Llama-3.1-8B
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [mergekit-community/L3.1-Athena-a-8B](https://huggingface.co/mergekit-community/L3.1-Athena-a-8B) as the base.
### Models Merged
The following models were included in the merge:
* [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B)
* [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)
* [mergekit-community/L3-Boshima-a](https://huggingface.co/mergekit-community/L3-Boshima-a)
* [Skywork/Skywork-o1-Open-Llama-3.1-8B](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)
* [MathGenie/MathCoder2-Llama-3-8B](https://huggingface.co/MathGenie/MathCoder2-Llama-3-8B)
* [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
out_dtype: bfloat16
merge_method: model_stock
base_model: mergekit-community/L3.1-Athena-a-8B
models:
- model: meta-llama/Llama-3.1-8B
- model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- model: Skywork/Skywork-o1-Open-Llama-3.1-8B
- model: MathGenie/MathCoder2-Llama-3-8B
- model: NousResearch/Hermes-3-Llama-3.1-8B
- model: mlabonne/NeuralDaredevil-8B-abliterated
- model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- model: Sao10K/L3-8B-Lunaris-v1
- model: mergekit-community/L3-Boshima-a
- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
```
|
Garak313/llama3_8B_Kardassi_16bit
|
Garak313
| 2025-03-03T14:53:52Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T14:44:54Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Garak313
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TheRobotNetwork/Llama-3.1-8B-Instruct
|
TheRobotNetwork
| 2025-03-03T14:53:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T14:48:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ElenaSenger/career-path-representation-mpnet-karrierewege-cp
|
ElenaSenger
| 2025-03-03T14:53:42Z | 0 | 1 | null |
[
"safetensors",
"mpnet",
"en",
"dataset:ElenaSenger/Karrierewege_plus",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T14:51:07Z |
---
license: apache-2.0
datasets:
- ElenaSenger/Karrierewege_plus
language:
- en
base_model:
- sentence-transformers/all-mpnet-base-v2
---
# career-path-representation-mpnet-karrierewege-cp
This is a fine-tuned version of [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on custom data.
For fine-tuning details, preprocessing code, and how to use this model for career path prediction, visit our GitHub repository: https://github.com/elenasenger/karrierewege
## Model Details
- **Base Model**: `sentence-transformers/all-mpnet-base-v2`
- **Fine-tuned on**: `ElenaSenger/Karrierewege_plus` (standardized)
- **Tasks**: Sentence Embeddings / Text Similarity
- **License**: Apache-2.0
## Usage
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ElenaSenger/career-path-representation-mpnet-karrierewege-cp")
tokenizer = AutoTokenizer.from_pretrained("ElenaSenger/career-path-representation-mpnet-karrierewege-cp")
```
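A short continuation of the snippet above, sketching how sentence embeddings could be computed; the mean-pooling recipe follows the usual all-mpnet-base-v2 convention and is an assumption here, as are the example job titles:
```python
import torch
import torch.nn.functional as F

texts = ["software engineer", "data scientist"]  # hypothetical career-step titles
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state         # (batch, seq_len, dim)
mask = inputs["attention_mask"].unsqueeze(-1)          # zero out padding positions
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling
embeddings = F.normalize(embeddings, dim=1)            # unit-length sentence vectors
```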
|
Captaint2004/Qwen2_VL_2B_GGUF_UNSLOTH_T
|
Captaint2004
| 2025-03-03T14:52:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2_vl",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T14:51:56Z |
---
base_model: unsloth/qwen2-vl-2b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Captaint2004
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-vl-2b-instruct-unsloth-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
falcongoldman/nexusai-v3
|
falcongoldman
| 2025-03-03T14:51:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T14:51:40Z |
---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** falcongoldman
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fats-fme/feb3e71e-4297-4189-a5f1-14a82032507e
|
fats-fme
| 2025-03-03T14:47:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T13:56:37Z |
---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: feb3e71e-4297-4189-a5f1-14a82032507e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5668b44a803035dd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5668b44a803035dd_train_data.json
type:
field_input: concepts
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/feb3e71e-4297-4189-a5f1-14a82032507e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 3.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GB
max_steps: 100
micro_batch_size: 1
mlflow_experiment_name: /tmp/5668b44a803035dd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eeda891a-7fde-4fb3-9e00-351e4d608c46
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eeda891a-7fde-4fb3-9e00-351e4d608c46
warmup_steps: 200
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# feb3e71e-4297-4189-a5f1-14a82032507e
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8959 | 0.0003 | 1 | 0.9655 |
| 0.8303 | 0.0167 | 50 | 0.7848 |
| 0.5512 | 0.0334 | 100 | 0.5544 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Quangnguyen711/codebert-solidity-time-dep
|
Quangnguyen711
| 2025-03-03T14:44:19Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-02-27T09:08:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheJoeZenOne/qwen-0.5-reasoning
|
TheJoeZenOne
| 2025-03-03T14:43:56Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T14:43:18Z |
---
base_model: unsloth/qwen2-0.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TheJoeZenOne
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-0.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
takedakoji00/Llama-3.3-70B-Instruct-custom-qg-full_20250303-7th_random_300
|
takedakoji00
| 2025-03-03T14:42:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T06:59:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
memevis/SG13
|
memevis
| 2025-03-03T14:42:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T14:36:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zelk12/MT-Merge7-gemma-2-9B-Q6_K-GGUF
|
zelk12
| 2025-03-03T14:40:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:zelk12/MT-Merge7-gemma-2-9B",
"base_model:quantized:zelk12/MT-Merge7-gemma-2-9B",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-03T14:40:15Z |
---
base_model: zelk12/MT-Merge7-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: gemma
pipeline_tag: text-generation
---
# zelk12/MT-Merge7-gemma-2-9B-Q6_K-GGUF
This model was converted to GGUF format from [`zelk12/MT-Merge7-gemma-2-9B`](https://huggingface.co/zelk12/MT-Merge7-gemma-2-9B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zelk12/MT-Merge7-gemma-2-9B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zelk12/MT-Merge7-gemma-2-9B-Q6_K-GGUF --hf-file mt-merge7-gemma-2-9b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zelk12/MT-Merge7-gemma-2-9B-Q6_K-GGUF --hf-file mt-merge7-gemma-2-9b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zelk12/MT-Merge7-gemma-2-9B-Q6_K-GGUF --hf-file mt-merge7-gemma-2-9b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zelk12/MT-Merge7-gemma-2-9B-Q6_K-GGUF --hf-file mt-merge7-gemma-2-9b-q6_k.gguf -c 2048
```
|
ElenaSenger/career-path-representation-mpnet-decorte-esco
|
ElenaSenger
| 2025-03-03T14:37:44Z | 0 | 1 | null |
[
"safetensors",
"mpnet",
"en",
"dataset:TechWolf/anonymous-working-histories",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"license:apache-2.0",
"region:us"
] | null | 2025-03-03T14:33:30Z |
---
license: apache-2.0
datasets:
- TechWolf/anonymous-working-histories
language:
- en
base_model:
- sentence-transformers/all-mpnet-base-v2
---
# career-path-representation-mpnet-decorte-esco
This is a fine-tuned version of [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) on custom data.
For fine-tuning details, preprocessing code, and how to use this model for career path prediction, visit our GitHub repository: https://github.com/elenasenger/karrierewege
## Model Details
- **Base Model**: `sentence-transformers/all-mpnet-base-v2`
- **Fine-tuned on**: `TechWolf/anonymous-working-histories` (standardized)
- **Tasks**: Sentence Embeddings / Text Similarity
- **License**: Apache-2.0
## Usage
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ElenaSenger/career-path-representation-mpnet-decorte-esco")
tokenizer = AutoTokenizer.from_pretrained("ElenaSenger/career-path-representation-mpnet-decorte-esco")
```
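Building on the snippet above, a minimal sketch of comparing two texts with cosine similarity (the mean-pooling recipe and the example strings are assumptions, not taken from this repo):
```python
import torch
import torch.nn.functional as F

def embed(texts):
    # mean pooling over non-padding tokens, the usual all-mpnet-base-v2 recipe
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    mask = inputs["attention_mask"].unsqueeze(-1)
    return F.normalize((hidden * mask).sum(dim=1) / mask.sum(dim=1), dim=1)

a, b = embed(["junior accountant", "financial auditor"])  # hypothetical job titles
print(float(a @ b))  # cosine similarity, since both vectors are unit length
```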
|
Captaint2004/Qwen2_VL_2B_GGUF_UNSLOTH
|
Captaint2004
| 2025-03-03T14:36:22Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2_vl",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-02T06:00:31Z |
---
base_model: unsloth/qwen2-vl-2b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Captaint2004
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-vl-2b-instruct-unsloth-bnb-4bit
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kort/xf2
|
Kort
| 2025-03-03T14:35:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T13:40:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
memevis/SG18
|
memevis
| 2025-03-03T14:35:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T14:30:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kiriyk/seo_tg_book_overtrained_gguf
|
kiriyk
| 2025-03-03T14:34:46Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-03T14:32:32Z |
---
base_model: unsloth/qwen2.5-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kiriyk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
memevis/SG12
|
memevis
| 2025-03-03T14:34:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T14:28:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
memevis/SG10
|
memevis
| 2025-03-03T14:34:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T14:28:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rell07/Omari
|
rell07
| 2025-03-03T14:31:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-03T14:02:53Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Omari
---
# Omari
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Omari` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('rell07/Omari', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word `Omari` in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
okamototk/DeepSeek-R1-Distill-Qwen-14B-imatrix-gguf
|
okamototk
| 2025-03-03T14:30:52Z | 50 | 0 | null |
[
"gguf",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-04T05:45:00Z |
---
license: mit
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---
## 1. Introduction
This model is a quantized version of DeepSeek-R1-Distill-Qwen-14B, using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm) as the importance-matrix (imatrix) dataset.
The imatrix data mixes English and Japanese, so the quantization is tuned for Japanese.
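In the absence of a documented usage snippet, a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) might look like this; the GGUF filename below is an assumption — pick an actual file from this repository.
```python
# Minimal sketch (untested); the filename is a placeholder — use a real
# GGUF file from this repository's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="okamototk/DeepSeek-R1-Distill-Qwen-14B-imatrix-gguf",
    filename="DeepSeek-R1-Distill-Qwen-14B-Q4_K_M.gguf",  # hypothetical filename
)
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("日本語で自己紹介してください。", max_tokens=256)
print(out["choices"][0]["text"])
```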
## 2. License
This code repository and the model weights are licensed under the [MIT License](LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](LICENSE-Qwen), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
|
leannmlindsey/hg38-uni-v4096
|
leannmlindsey
| 2025-03-03T14:30:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-02-28T19:35:27Z |
---
{}
---
This is a specialized BPE tokenizer trained on human DNA sequences and has a vocabulary size of 4096.
## Special Tokens
- PAD: [PAD]
- UNK: [UNK]
- CLS: [CLS]
- SEP: [SEP]
- MASK: [MASK]
## Usage
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("leannmlindsey/hg38-bpe-v4096")
# Example usage: encode DNA sequences into BPE token IDs
sequences = ["ATCGATCGATCG", "GCTAGCTAGCTA"]
encoded = tokenizer(sequences)
print(encoded["input_ids"])  # one list of token IDs per sequence
```
## Training Information
This tokenizer was trained with the BPE algorithm from the HuggingFace Tokenizers library on the hg38 human reference genome.
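For reference, a tokenizer with these settings could be trained roughly as follows; the corpus file below is a hypothetical placeholder, not the actual training script.
```python
# Illustrative sketch only — the corpus path is an assumption.
from tokenizers import Tokenizer, models, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
trainer = trainers.BpeTrainer(
    vocab_size=4096,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.train(files=["hg38_sequences.txt"], trainer=trainer)  # hypothetical corpus
tokenizer.save("tokenizer.json")
```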
|
daniel40/6142dc13-b92f-484b-8dab-7837c9954217
|
daniel40
| 2025-03-03T14:30:06Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/Mistral-Nemo-Instruct-2407",
"base_model:adapter:unsloth/Mistral-Nemo-Instruct-2407",
"region:us"
] | null | 2025-03-03T14:29:50Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Mistral-Nemo-Instruct-2407
model-index:
- name: daniel40/6142dc13-b92f-484b-8dab-7837c9954217
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel40/6142dc13-b92f-484b-8dab-7837c9954217
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8368
## Model description
More information needed
## Intended uses & limitations
More information needed
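Since no usage example is provided, a minimal sketch for loading this LoRA adapter onto its base model (assuming a causal-LM setup) could be:
```python
# Minimal sketch, assuming a causal-LM LoRA adapter on the listed base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Mistral-Nemo-Instruct-2407")
model = PeftModel.from_pretrained(base, "daniel40/6142dc13-b92f-484b-8dab-7837c9954217")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Mistral-Nemo-Instruct-2407")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```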
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Garak313/kardassi_lora
|
Garak313
| 2025-03-03T14:29:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T14:29:50Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Garak313
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
matiashoyl/modernbert-match-user-22601
|
matiashoyl
| 2025-03-03T14:28:14Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-base",
"base_model:finetune:answerdotai/ModernBERT-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-03T04:38:23Z |
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: modernbert-match-user-22601
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modernbert-match-user-22601
This model is a fine-tuned version of [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8690
- Accuracy: 0.6574
- F1: 0.6004
- Precision: 0.6355
- Recall: 0.6574
- F1 Class 0: 0.2353
- F1 Class 1: 0.3333
- F1 Class 2: 0.4762
- F1 Class 3: 0.1667
- F1 Class 4: 0.7922
## Model description
More information needed
## Intended uses & limitations
More information needed
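Pending proper documentation, a rough inference sketch could look like this; note that the meanings of the five class labels are not documented in this card.
```python
# Rough sketch — the five class labels are not documented here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "matiashoyl/modernbert-match-user-22601"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("example input text", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the 5 classes
```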
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 107
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | F1 Class 0 | F1 Class 1 | F1 Class 2 | F1 Class 3 | F1 Class 4 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:--------:|:------:|:----------:|:----------:|:----------:|:----------:|:----------:|:---------------:|:---------:|:------:|
| 1.1146 | 1.0 | 107 | 0.6667 | 0.5921 | 0.3 | 0.3636 | 0.4 | 0.0 | 0.7950 | 1.0552 | 0.6154 | 0.6667 |
| 1.0384 | 2.0 | 214 | 0.6852 | 0.6377 | 0.2857 | 0.4 | 0.4 | 0.3636 | 0.8182 | 1.0211 | 0.6736 | 0.6852 |
| 0.8178 | 3.0 | 321 | 0.6852 | 0.6026 | 0.2353 | 0.3636 | 0.5714 | 0.0 | 0.8 | 1.4185 | 0.6698 | 0.6852 |
| 0.8188 | 4.0 | 428 | 0.6852 | 0.6185 | 0.3158 | 0.3333 | 0.4286 | 0.2 | 0.8075 | 1.3625 | 0.7042 | 0.6852 |
| 0.8081 | 5.0 | 535 | 0.6667 | 0.6158 | 0.2353 | 0.3636 | 0.4545 | 0.3077 | 0.7974 | 1.3901 | 0.6785 | 0.6667 |
| 0.6187 | 6.0 | 642 | 0.6667 | 0.6071 | 0.2222 | 0.3077 | 0.4286 | 0.3077 | 0.7975 | 1.7125 | 0.6361 | 0.6667 |
| 0.6079 | 7.0 | 749 | 0.6389 | 0.5824 | 0.2222 | 0.2353 | 0.375 | 0.1818 | 0.7922 | 1.7332 | 0.5972 | 0.6389 |
| 0.7141 | 8.0 | 856 | 0.6574 | 0.6004 | 0.2353 | 0.3333 | 0.4762 | 0.1667 | 0.7922 | 1.8690 | 0.6355 | 0.6574 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0
- Datasets 2.21.0
- Tokenizers 0.21.0
|
mikeeilertsen/mike-eilertsen
|
mikeeilertsen
| 2025-03-03T14:26:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-03T13:48:49Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: fakemike
---
# Mike Eilertsen
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `fakemike` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('mikeeilertsen/mike-eilertsen', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word `fakemike` in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
molocchus/calculator_model_test
|
molocchus
| 2025-03-03T14:23:50Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-03-03T14:22:48Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0085
## Model description
More information needed
## Intended uses & limitations
More information needed
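As a hedged sketch of inference (the expected input format for arithmetic expressions is undocumented, so the example string is a guess):
```python
# Sketch only — the expected input format (spacing, operators) is a guess,
# and the encoder-decoder checkpoint is assumed to load via AutoModelForSeq2SeqLM.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "molocchus/calculator_model_test"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("12+34", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```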
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9669 | 1.0 | 6 | 0.6731 |
| 0.4986 | 2.0 | 12 | 0.3548 |
| 0.3213 | 3.0 | 18 | 0.2928 |
| 0.2867 | 4.0 | 24 | 0.2126 |
| 0.2162 | 5.0 | 30 | 0.1518 |
| 0.167 | 6.0 | 36 | 0.1237 |
| 0.1443 | 7.0 | 42 | 0.0969 |
| 0.1235 | 8.0 | 48 | 0.0684 |
| 0.1002 | 9.0 | 54 | 0.0610 |
| 0.0928 | 10.0 | 60 | 0.0616 |
| 0.0843 | 11.0 | 66 | 0.0507 |
| 0.0737 | 12.0 | 72 | 0.0435 |
| 0.0606 | 13.0 | 78 | 0.0387 |
| 0.0583 | 14.0 | 84 | 0.0363 |
| 0.0529 | 15.0 | 90 | 0.0278 |
| 0.0515 | 16.0 | 96 | 0.0267 |
| 0.0539 | 17.0 | 102 | 0.0263 |
| 0.0514 | 18.0 | 108 | 0.0312 |
| 0.0498 | 19.0 | 114 | 0.0248 |
| 0.0405 | 20.0 | 120 | 0.0252 |
| 0.0376 | 21.0 | 126 | 0.0242 |
| 0.0417 | 22.0 | 132 | 0.0279 |
| 0.0361 | 23.0 | 138 | 0.0219 |
| 0.0327 | 24.0 | 144 | 0.0152 |
| 0.0288 | 25.0 | 150 | 0.0146 |
| 0.0253 | 26.0 | 156 | 0.0162 |
| 0.0223 | 27.0 | 162 | 0.0140 |
| 0.0207 | 28.0 | 168 | 0.0118 |
| 0.0198 | 29.0 | 174 | 0.0108 |
| 0.0191 | 30.0 | 180 | 0.0109 |
| 0.0172 | 31.0 | 186 | 0.0096 |
| 0.0165 | 32.0 | 192 | 0.0093 |
| 0.0153 | 33.0 | 198 | 0.0091 |
| 0.0156 | 34.0 | 204 | 0.0092 |
| 0.0159 | 35.0 | 210 | 0.0092 |
| 0.0153 | 36.0 | 216 | 0.0088 |
| 0.0157 | 37.0 | 222 | 0.0085 |
| 0.0149 | 38.0 | 228 | 0.0084 |
| 0.0135 | 39.0 | 234 | 0.0086 |
| 0.0135 | 40.0 | 240 | 0.0085 |
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.20.3
|
dabrown/3c18dbaf-547c-4fe5-88e7-888d266f879d
|
dabrown
| 2025-03-03T14:22:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"starcoder2",
"axolotl",
"generated_from_trainer",
"base_model:bigcode/starcoder2-3b",
"base_model:adapter:bigcode/starcoder2-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-03-03T14:16:35Z |
---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoder2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3c18dbaf-547c-4fe5-88e7-888d266f879d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.5.2`
```yaml
adapter: lora
base_model: bigcode/starcoder2-3b
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 012ab4813cc99fb8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/012ab4813cc99fb8_train_data.json
type:
field_input: evidence
field_instruction: question
field_output: SQL
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: dabrown/3c18dbaf-547c-4fe5-88e7-888d266f879d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_inference_mode: true
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/012ab4813cc99fb8_train_data.json
model_type: AutoModelForCausalLM
modules_to_save: lm_head
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
peft_use_rslora: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: b1e23278-252e-44d7-9491-1b28d344421c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: b1e23278-252e-44d7-9491-1b28d344421c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3c18dbaf-547c-4fe5-88e7-888d266f879d
This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 198
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5364 | 0.0051 | 1 | 0.9137 |
| 0.6391 | 0.2525 | 50 | 0.4269 |
| 0.6041 | 0.5051 | 100 | 0.3380 |
| 0.4258 | 0.7576 | 150 | 0.3037 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Cleverlytics/offres_classification_bert_v1
|
Cleverlytics
| 2025-03-03T14:20:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:SI2M-Lab/DarijaBERT",
"base_model:finetune:SI2M-Lab/DarijaBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-03T10:58:47Z |
---
library_name: transformers
base_model: SI2M-Lab/DarijaBERT
tags:
- generated_from_trainer
model-index:
- name: offres_classification_bert_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# offres_classification_bert_v1
This model is a fine-tuned version of [SI2M-Lab/DarijaBERT](https://huggingface.co/SI2M-Lab/DarijaBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0010
## Model description
More information needed
## Intended uses & limitations
More information needed
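No usage example is documented; a minimal sketch with the `transformers` pipeline (the classifier's label names are not listed here) might be:
```python
# Minimal sketch — the classifier's label names are not documented in this card.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Cleverlytics/offres_classification_bert_v1",
)
print(clf("example job-offer text"))  # illustrative input
```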
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 135 | 0.0265 |
| No log | 2.0 | 270 | 0.0045 |
| No log | 3.0 | 405 | 0.0063 |
| 0.1371 | 4.0 | 540 | 0.0023 |
| 0.1371 | 5.0 | 675 | 0.0030 |
| 0.1371 | 6.0 | 810 | 0.0020 |
| 0.1371 | 7.0 | 945 | 0.0013 |
| 0.0009 | 8.0 | 1080 | 0.0011 |
| 0.0009 | 9.0 | 1215 | 0.0010 |
| 0.0009 | 10.0 | 1350 | 0.0010 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
|
Bilal2912/red_lehenga_new
|
Bilal2912
| 2025-03-03T14:11:44Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-28T10:11:43Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: lehenga
---
# Red_Lehenga_New
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `lehenga` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Bilal2912/red_lehenga_new', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word `lehenga` in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Biolbe/swin-tiny-patch4-window7-224-finetuned-eurosat
|
Biolbe
| 2025-03-03T14:11:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-03-03T13:59:45Z |
---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.98
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0544
- Accuracy: 0.98
## Model description
More information needed
## Intended uses & limitations
More information needed
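No usage example is documented; a minimal sketch with the `transformers` image-classification pipeline might be:
```python
# Minimal sketch — class names come from the (undocumented) imagefolder labels.
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="Biolbe/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(clf("example.jpg"))  # path or URL to an input image
```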
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2069 | 1.0 | 190 | 0.0867 | 0.9681 |
| 0.1617 | 2.0 | 380 | 0.0670 | 0.9756 |
| 0.1308 | 3.0 | 570 | 0.0544 | 0.98 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
irishprancer/c7833d31-8fa5-42cd-bc32-c00da4978d68
|
irishprancer
| 2025-03-03T14:11:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T13:54:36Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/BlackSheep-Qwen-14B-i1-GGUF
|
mradermacher
| 2025-03-03T14:10:12Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/BlackSheep-Qwen-14B",
"base_model:quantized:TroyDoesAI/BlackSheep-Qwen-14B",
"license:cc-by-nd-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-03T11:42:43Z |
---
base_model: TroyDoesAI/BlackSheep-Qwen-14B
language:
- en
library_name: transformers
license: cc-by-nd-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TroyDoesAI/BlackSheep-Qwen-14B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
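For a quick, unofficial way to try one of these quants from Python, recent versions of [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) can pull a file straight from the Hub; the filename below is the "fast, recommended" Q4_K_M from the table that follows.
```python
# Unofficial sketch; requires a recent llama-cpp-python with huggingface_hub.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/BlackSheep-Qwen-14B-i1-GGUF",
    filename="BlackSheep-Qwen-14B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(llm("Hello!", max_tokens=64)["choices"][0]["text"])
```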
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/BlackSheep-Qwen-14B-i1-GGUF/resolve/main/BlackSheep-Qwen-14B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jiaxin-wen/alpaca-bc512-iter1-70b-incontext
|
jiaxin-wen
| 2025-03-03T14:08:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T13:34:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF
|
mradermacher
| 2025-03-03T14:08:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:nkpz/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT",
"base_model:quantized:nkpz/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-03T12:06:40Z |
---
base_model: nkpz/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/nkpz/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
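If a quant you download is split into parts, a simple byte-level concatenation is enough to reassemble it; the part filenames below are illustrative assumptions (this repo's quants are single files).
```python
# Illustrative sketch of reassembling a split GGUF; the part names are
# assumptions — match them to the actual files you downloaded.
import shutil

parts = [
    "model.Q8_0.gguf.part1of2",  # hypothetical
    "model.Q8_0.gguf.part2of2",  # hypothetical
]
with open("model.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```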
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT-GGUF/resolve/main/Qwen2.5-Dyanka-7B-Preview-Uncensored-DeLMAT.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
sh-sahil/fin
|
sh-sahil
| 2025-03-03T14:08:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-03T19:42:07Z |
---
base_model: unsloth/Qwen2.5-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sh-sahil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-3B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
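No usage snippet is included; below is a minimal hedged sketch with 🤗 Transformers, assuming the repo contains merged full weights loadable as a standard Qwen2 causal LM rather than only an adapter:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="sh-sahil/fin", device_map="auto")
# Hypothetical prompt; the fine-tuning domain is not documented in this card.
print(generator("Explain what a balance sheet is.", max_new_tokens=64)[0]["generated_text"])
```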
|
miladalsh/llama3-trained-journalist-on-gpt3-for-1-epochs
|
miladalsh
| 2025-03-03T14:07:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-03-01T16:50:25Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
model_name: llama3-trained-journalist-on-gpt3-for-1-epochs
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama3-trained-journalist-on-gpt3-for-1-epochs
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="miladalsh/llama3-trained-journalist-on-gpt3-for-1-epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/milad-it/training-llama-on-conversations/runs/84096v6d)
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.0
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zelk12/MT-Merge7-UW-gemma-2-9B
|
zelk12
| 2025-03-03T14:05:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Merge7-U-gemma-2-MT1g7MT2g7-9B",
"base_model:merge:zelk12/MT-Merge7-U-gemma-2-MT1g7MT2g7-9B",
"base_model:zelk12/MT-Merge7-W-gemma-2-MT2g7MT1g7-9B",
"base_model:merge:zelk12/MT-Merge7-W-gemma-2-MT2g7MT1g7-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-03T13:59:26Z |
---
base_model:
- zelk12/MT-Merge7-U-gemma-2-MT1g7MT2g7-9B
- zelk12/MT-Merge7-W-gemma-2-MT2g7MT1g7-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Merge7-U-gemma-2-MT1g7MT2g7-9B](https://huggingface.co/zelk12/MT-Merge7-U-gemma-2-MT1g7MT2g7-9B)
* [zelk12/MT-Merge7-W-gemma-2-MT2g7MT1g7-9B](https://huggingface.co/zelk12/MT-Merge7-W-gemma-2-MT2g7MT1g7-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Merge7-U-gemma-2-MT1g7MT2g7-9B
- model: zelk12/MT-Merge7-W-gemma-2-MT2g7MT1g7-9B
merge_method: slerp
base_model: zelk12/MT-Merge7-U-gemma-2-MT1g7MT2g7-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
ysdede/Phi-4-mm-inst-asr-turkish-3
|
ysdede
| 2025-03-03T14:05:25Z | 0 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"phi4mm",
"text-generation",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:microsoft/Phi-4-multimodal-instruct",
"base_model:finetune:microsoft/Phi-4-multimodal-instruct",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-03-02T11:21:50Z |
---
library_name: transformers
license: mit
base_model: microsoft/Phi-4-multimodal-instruct
tags:
- generated_from_trainer
model-index:
- name: Phi-4-mm-inst-asr-turkish-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi-4-mm-inst-asr-turkish-3
This model is a fine-tuned version of [microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) on a 1300-hour Turkish audio dataset.
## Training Prompt
The model was initially fine-tuned using the original ASR prompt: "Transcribe the audio clip into text."
This prompt is language agnostic—as described in the model [paper](https://huggingface.co/microsoft/Phi-4-multimodal-instruct/blob/main/phi_4_mm.tech_report.02252025.pdf):
> The ASR prompt for Phi-4-Multimodal is “Transcribe the audio clip into text.”, which is
> language agnostic. We notice that the model can learn to recognize in the target language perfectly
> without providing language information, while Qwen2-audio and Gemini-2.0-Flash require the language
> information in the prompt to obtain the optimal ASR performance.
However, we found that using a language-specific prompt such as "Transcribe the Turkish audio." leads to better performance.
See: [ysdede/Phi-4-mm-inst-asr-turkish](https://huggingface.co/ysdede/Phi-4-mm-inst-asr-turkish)
## Training Results
When benchmarked with the original ASR prompt "Transcribe the audio clip into text.", the evaluation results were as follows:
- **Before Fine-Tuning:**
- WER: 153.84
- CER: 82.57
- **After Fine-Tuning:**
- WER: 64.76
- CER: 29.85
## Inference
Load `generation_config` and `processor` from the base model as a quick fix to use the default generation settings.
*Note: The new models currently lack high-quality fine-tuning scripts. When saving a fine-tuned model using `model.save_pretrained()`, the processor configuration—including essential audio parameters—is not automatically saved. This omission can lead to errors during inference due to the model’s complex architecture. Loading these components from the base model ensures that all critical settings are properly included.*
```python
from transformers import AutoProcessor, GenerationConfig

# Pull the generation config and processor from the base model so that the
# audio-specific settings missing from this fine-tuned repo are included.
generation_config = GenerationConfig.from_pretrained(
    'microsoft/Phi-4-multimodal-instruct', 'generation_config.json'
)
processor = AutoProcessor.from_pretrained(
    'microsoft/Phi-4-multimodal-instruct', trust_remote_code=True
)
```
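Once the processor and generation config are in place, transcription can look like the following sketch. The chat markers and the `audios` argument mirror the base model's published example and should be treated as assumptions; `audio.wav` is a placeholder path:

```python
import soundfile as sf
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    'ysdede/Phi-4-mm-inst-asr-turkish-3', trust_remote_code=True, torch_dtype='auto'
).to('cuda')

# This checkpoint was fine-tuned with the original language-agnostic ASR prompt.
prompt = '<|user|><|audio_1|>Transcribe the audio clip into text.<|end|><|assistant|>'
audio, sr = sf.read('audio.wav')  # placeholder audio file
inputs = processor(text=prompt, audios=[(audio, sr)], return_tensors='pt').to('cuda')

out = model.generate(**inputs, max_new_tokens=256, generation_config=generation_config)
print(processor.batch_decode(out[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
```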
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.20.3
|
hatemestinbejaia/mmarco-Arabic-mMiniLML-cross-encoder-KD-v1-Sbce0.3-with-Tbce0.7epoch1
|
hatemestinbejaia
| 2025-03-03T14:04:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-03T14:04:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
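The card leaves this section blank. Based on the repository name (an Arabic mMARCO cross-encoder distilled from a teacher) and the `text-classification` tag, here is a hedged reranking sketch; the interpretation of the logit as a relevance score is an assumption:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "hatemestinbejaia/mmarco-Arabic-mMiniLML-cross-encoder-KD-v1-Sbce0.3-with-Tbce0.7epoch1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A cross-encoder scores a (query, passage) pair jointly; this pair is hypothetical.
inputs = tokenizer("ما هي عاصمة فرنسا؟", "باريس هي عاصمة فرنسا.", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze()
print(float(score))  # higher is assumed to mean more relevant
```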
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omarwaleed523/gemma2_finetuned_newds
|
omarwaleed523
| 2025-03-03T14:04:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T14:03:56Z |
---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** omarwaleed523
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
irishprancer/35ff4d15-88d8-419d-a445-e88213708efe
|
irishprancer
| 2025-03-03T14:04:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T08:45:45Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gajanhcc/colpali-ft
|
gajanhcc
| 2025-03-03T14:03:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:vidore/colpali-v1.2-hf",
"base_model:adapter:vidore/colpali-v1.2-hf",
"region:us"
] | null | 2025-03-03T14:03:44Z |
---
base_model: vidore/colpali-v1.2-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
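The card leaves this section blank. Since the adapter targets `vidore/colpali-v1.2-hf`, a minimal PEFT loading sketch follows; the ColPali classes require a recent Transformers release, and their use here is an assumption:

```python
from peft import PeftModel
from transformers import ColPaliForRetrieval, ColPaliProcessor

base_id = "vidore/colpali-v1.2-hf"
processor = ColPaliProcessor.from_pretrained(base_id)
base = ColPaliForRetrieval.from_pretrained(base_id, torch_dtype="auto")

# Attach this repo's LoRA adapter on top of the base checkpoint.
model = PeftModel.from_pretrained(base, "gajanhcc/colpali-ft")
```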
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
gajanhcc/colpali_uf
|
gajanhcc
| 2025-03-03T14:03:38Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vidore/colpali-v1.2-hf",
"base_model:adapter:vidore/colpali-v1.2-hf",
"license:gemma",
"region:us"
] | null | 2025-03-03T14:03:29Z |
---
library_name: peft
license: gemma
base_model: vidore/colpali-v1.2-hf
tags:
- generated_from_trainer
model-index:
- name: colpali_uf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# colpali_uf
This model is a fine-tuned version of [vidore/colpali-v1.2-hf](https://huggingface.co/vidore/colpali-v1.2-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.50.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
|
ahmetsayko1/gkhn-w3-t1
|
ahmetsayko1
| 2025-03-03T14:03:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-03T14:03:32Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: gkhn_w3_t1
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# gkhn_w3_t1
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `gkhn_w3_t1` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
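For diffusers users, a hedged sketch assuming the safetensors file in this repo is a diffusers-loadable Flux LoRA (Fluxgym exports kohya-style weights, which recent diffusers releases can read):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("ahmetsayko1/gkhn-w3-t1")

# The prompt content is hypothetical; only the trigger word comes from this card.
image = pipe("gkhn_w3_t1, portrait photo", num_inference_steps=28).images[0]
image.save("gkhn_w3_t1.png")
```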
|
ujjwal1996/TinyLlama_lora_finetuned
|
ujjwal1996
| 2025-03-03T14:02:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T13:59:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dolmer/GutBrainIE_NER_baseline
|
Dolmer
| 2025-03-03T13:54:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-03-03T12:11:19Z |
---
library_name: transformers
license: mit
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: GutBrainIE_NER_baseline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GutBrainIE_NER_baseline
This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3083
- Precision: 0.6802
- Recall: 0.6483
- F1: 0.6639
- Accuracy: 0.9188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 92 | 0.4493 | 0.5809 | 0.4528 | 0.5089 | 0.8836 |
| No log | 2.0 | 184 | 0.3428 | 0.6849 | 0.5128 | 0.5865 | 0.9108 |
| No log | 3.0 | 276 | 0.2942 | 0.6480 | 0.6204 | 0.6339 | 0.9174 |
| No log | 4.0 | 368 | 0.2903 | 0.6780 | 0.6199 | 0.6476 | 0.9197 |
| No log | 5.0 | 460 | 0.2982 | 0.7156 | 0.6128 | 0.6602 | 0.9208 |
| 0.3823 | 6.0 | 552 | 0.2849 | 0.6767 | 0.6483 | 0.6622 | 0.9198 |
| 0.3823 | 7.0 | 644 | 0.2990 | 0.6705 | 0.6368 | 0.6532 | 0.9176 |
| 0.3823 | 8.0 | 736 | 0.3068 | 0.6756 | 0.6450 | 0.6600 | 0.9184 |
| 0.3823 | 9.0 | 828 | 0.3061 | 0.6812 | 0.6466 | 0.6635 | 0.9190 |
| 0.3823 | 10.0 | 920 | 0.3083 | 0.6802 | 0.6483 | 0.6639 | 0.9188 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
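Since the usage sections above are empty, here is a minimal hedged sketch using the `token-classification` pipeline; the entity label set is whatever the fine-tune used and is not documented in this card:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Dolmer/GutBrainIE_NER_baseline",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
# Hypothetical sentence in the gut-brain domain suggested by the model name.
print(ner("Gut microbiota composition may influence brain function."))
```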
|
MMusername/calculator_model_test
|
MMusername
| 2025-03-03T13:53:30Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-03-03T11:28:39Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test
This model is a fine-tuned version of [](https://huggingface.co/) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3655 | 1.0 | 6 | 2.7160 |
| 2.3592 | 2.0 | 12 | 2.0612 |
| 1.8523 | 3.0 | 18 | 1.6791 |
| 1.6426 | 4.0 | 24 | 1.7810 |
| 1.7065 | 5.0 | 30 | 1.9239 |
| 1.6284 | 6.0 | 36 | 1.5635 |
| 1.5577 | 7.0 | 42 | 1.6272 |
| 1.5439 | 8.0 | 48 | 1.5541 |
| 1.5291 | 9.0 | 54 | 1.5177 |
| 1.4973 | 10.0 | 60 | 1.4948 |
| 1.4944 | 11.0 | 66 | 1.4746 |
| 1.4896 | 12.0 | 72 | 1.4857 |
| 1.4825 | 13.0 | 78 | 1.4613 |
| 1.4419 | 14.0 | 84 | 1.4153 |
| 1.4351 | 15.0 | 90 | 1.3887 |
| 1.3585 | 16.0 | 96 | 1.3681 |
| 1.3454 | 17.0 | 102 | 1.3402 |
| 1.3419 | 18.0 | 108 | 1.3145 |
| 1.2937 | 19.0 | 114 | 1.3013 |
| 1.3193 | 20.0 | 120 | 1.3074 |
| 1.2909 | 21.0 | 126 | 1.2146 |
| 1.2114 | 22.0 | 132 | 1.1740 |
| 1.1971 | 23.0 | 138 | 1.1854 |
| 1.175 | 24.0 | 144 | 1.2008 |
| 1.1643 | 25.0 | 150 | 1.0863 |
| 1.1425 | 26.0 | 156 | 1.1085 |
| 1.1197 | 27.0 | 162 | 1.0871 |
| 1.0919 | 28.0 | 168 | 1.1259 |
| 1.0798 | 29.0 | 174 | 1.0877 |
| 1.0937 | 30.0 | 180 | 1.0704 |
| 1.0625 | 31.0 | 186 | 1.0540 |
| 1.0688 | 32.0 | 192 | 1.0596 |
| 1.0514 | 33.0 | 198 | 1.0648 |
| 1.066 | 34.0 | 204 | 1.0433 |
| 1.0302 | 35.0 | 210 | 1.0098 |
| 1.0291 | 36.0 | 216 | 0.9593 |
| 1.0629 | 37.0 | 222 | 0.9388 |
| 0.9886 | 38.0 | 228 | 0.9264 |
| 0.9615 | 39.0 | 234 | 0.9218 |
| 0.9775 | 40.0 | 240 | 0.9187 |
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.20.3
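No usage example is given; a hedged sketch with the `text2text-generation` pipeline follows. The expected input format (a plain arithmetic expression) is an assumption based on the model name:

```python
from transformers import pipeline

calc = pipeline("text2text-generation", model="MMusername/calculator_model_test")
# Hypothetical input; the training data format is not documented in this card.
print(calc("12 + 7")[0]["generated_text"])
```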
|
John6666/sudachi-xl-illustrious-v1-sdxl
|
John6666
| 2025-03-03T13:52:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"hentai",
"characters",
"clean anime style with thin to thick outlines",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-03-03T13:44:10Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- hentai
- characters
- clean anime style with thin to thick outlines
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is available [here](https://civitai.com/models/1288125/sudachi-xl-illustrious?modelVersionId=1453425).
This model was created by [Ikena](https://civitai.com/user/Ikena).
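Since the repo is in diffusers format (see the `StableDiffusionXLPipeline` tag), a minimal loading sketch; the prompt is hypothetical:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/sudachi-xl-illustrious-v1-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, clean anime style, thin outlines", num_inference_steps=28).images[0]
image.save("sudachi_sample.png")
```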
|
huihui-ai/Phi-4-mini-instruct-abliterated
|
huihui-ai
| 2025-03-03T13:52:08Z | 14 | 3 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"abliterated",
"uncensored",
"conversational",
"custom_code",
"multilingual",
"ar",
"zh",
"cs",
"da",
"nl",
"en",
"fi",
"fr",
"de",
"he",
"hu",
"it",
"ja",
"ko",
"no",
"pl",
"pt",
"ru",
"es",
"sv",
"th",
"tr",
"uk",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:finetune:microsoft/Phi-4-mini-instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T17:12:46Z |
---
license: mit
license_link: https://huggingface.co/huihui-ai/Phi-4-mini-instruct-abliterated/resolve/main/LICENSE
language:
- "multilingual"
- "ar"
- "zh"
- "cs"
- "da"
- "nl"
- "en"
- "fi"
- "fr"
- "de"
- "he"
- "hu"
- "it"
- "ja"
- "ko"
- "no"
- "pl"
- "pt"
- "ru"
- "es"
- "sv"
- "th"
- "tr"
- "uk"
pipeline_tag: text-generation
base_model:
- microsoft/Phi-4-mini-instruct
tags:
- nlp
- code
- abliterated
- uncensored
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
library_name: transformers
---
# huihui-ai/Phi-4-mini-instruct-abliterated
This is an uncensored version of [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.
## Use with ollama
Ollama requires the [latest version](https://github.com/ollama/ollama/releases).
You can run [huihui_ai/phi4-mini-abliterated](https://ollama.com/huihui_ai/phi4-mini-abliterated) directly:
```
ollama run huihui_ai/phi4-mini-abliterated
```
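If you prefer to run the model with 🤗 Transformers instead of Ollama, here is a hedged sketch (the repo ships custom code, hence `trust_remote_code=True`; the example question is taken from this card's widget):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="huihui-ai/Phi-4-mini-instruct-abliterated",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)
messages = [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```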
### Donation
If you like it, please click 'like' and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
##### Your donation helps us continue development and improvement; even a cup of coffee makes a difference.
- bitcoin:
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
|
distill-lab/distill-lab_nai-distill_00-01_combined_eagle.library_classification
|
distill-lab
| 2025-03-03T13:50:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"dinov2_with_registers",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-03-03T13:49:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
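The card leaves this section blank. Given the `image-classification` tag, a minimal hedged sketch; the input image and the label set are hypothetical:

```python
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="distill-lab/distill-lab_nai-distill_00-01_combined_eagle.library_classification",
)
print(clf("example.png"))  # placeholder image path; labels are undocumented
```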
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
testliai-main/testliai-generate-exam-mistral-7b-instruct-v0.3-bnb-4bit-GGUF-q4_k_m
|
testliai-main
| 2025-03-03T13:49:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-03T13:47:22Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** testliai-main
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|