Dataset columns:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5–139 |
| author | string | lengths 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-27 00:42:13 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 499 distinct values |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-27 00:40:00 |
| card | string | lengths 11 – 1.01M |
TOMFORD79/Zata_34 | TOMFORD79 | 2025-05-02T09:49:36Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-02T09:37:39Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Tanchi00/tinyllama-lowram | Tanchi00 | 2025-05-02T09:42:33Z | 0 | 1 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T09:30:43Z | ---
license: apache-2.0
---
|
Wajdii98/Pixtral_12B_guide_mixed | Wajdii98 | 2025-05-02T09:42:22Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llava",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T09:42:21Z | ---
base_model: unsloth/pixtral-12b-2409-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llava
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Wajdii98
- **License:** apache-2.0
- **Finetuned from model:** unsloth/pixtral-12b-2409-unsloth-bnb-4bit
This llava model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
maksf8486/16c86859-ce88-4d8e-a0c9-8bbdaea7c78f | maksf8486 | 2025-05-02T09:36:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:sethuiyer/Medichat-Llama3-8B",
"base_model:adapter:sethuiyer/Medichat-Llama3-8B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T09:08:10Z | ---
library_name: peft
license: other
base_model: sethuiyer/Medichat-Llama3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 16c86859-ce88-4d8e-a0c9-8bbdaea7c78f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: sethuiyer/Medichat-Llama3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - ebaa36ac6b1bdb65_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/ebaa36ac6b1bdb65_train_data.json
  type:
    field_input: reasoning (reasoning_content)
    field_instruction: question
    field_output: response (content)
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
dpo:
  beta: 0.1
  enabled: true
  group_by_length: false
  rank_loss: false
  reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: maksf8486/16c86859-ce88-4d8e-a0c9-8bbdaea7c78f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ebaa36ac6b1bdb65_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 33f6b38d-f8bd-4301-b3c9-673be809902f
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 33f6b38d-f8bd-4301-b3c9-673be809902f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
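A config like this is normally launched with axolotl's CLI, e.g. `accelerate launch -m axolotl.cli.train config.yaml` (the exact invocation can vary by axolotl version; `0.4.1` here).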
# 16c86859-ce88-4d8e-a0c9-8bbdaea7c78f
This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the `ebaa36ac6b1bdb65_train_data.json` dataset configured above.
It achieves the following results on the evaluation set:
- Loss: 0.8439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9092 | 0.0802 | 200 | 0.8439 |
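Since this repository holds a LoRA adapter rather than full weights, inference requires attaching the adapter to the base model. A minimal sketch with `transformers` and `peft` (untested here; assumes enough memory for the 8B base model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this repo's LoRA adapter
base = AutoModelForCausalLM.from_pretrained("sethuiyer/Medichat-Llama3-8B")
model = PeftModel.from_pretrained(base, "maksf8486/16c86859-ce88-4d8e-a0c9-8bbdaea7c78f")
tokenizer = AutoTokenizer.from_pretrained("sethuiyer/Medichat-Llama3-8B")
```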
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kybalico/CalicoMix_EroILL | Kybalico | 2025-05-02T09:34:07Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2025-04-27T06:50:34Z | ---
license: cc-by-nc-sa-4.0
---
Copyright (c) 2025 Kybalico
Permission is granted to use, modify, and distribute this model **for non-commercial purposes only**.
You may **not**:
- Sell or monetize this model (weights, fine-tuned versions, etc.).
- Sell outputs (images, text, etc.) generated using this model.
Commercial use requires explicit written permission from the author.
|
aminlouhichi/gemma-3-cdg71 | aminlouhichi | 2025-05-02T09:32:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T09:32:39Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aminlouhichi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
arosidi/vit-base-oxford-iiit-pets | arosidi | 2025-05-02T09:29:25Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-05-02T09:19:29Z | ---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-oxford-iiit-pets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-oxford-iiit-pets
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pcuenq/oxford-pets dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1865
- Accuracy: 0.9432
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.38 | 1.0 | 370 | 0.3080 | 0.9242 |
| 0.2037 | 2.0 | 740 | 0.2364 | 0.9350 |
| 0.1495 | 3.0 | 1110 | 0.2132 | 0.9459 |
| 0.1517 | 4.0 | 1480 | 0.2060 | 0.9432 |
| 0.1501 | 5.0 | 1850 | 0.2052 | 0.9432 |
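For quick predictions, a hedged sketch using the `transformers` image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Load this fine-tuned checkpoint as an image-classification pipeline
classifier = pipeline("image-classification", model="arosidi/vit-base-oxford-iiit-pets")
print(classifier("my_pet_photo.jpg"))  # e.g. [{'label': 'samoyed', 'score': ...}, ...]
```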
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-1-epochs-enhanced_test | Alessio-Borgi | 2025-05-02T09:28:19Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6551",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-02T09:27:52Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6551
- loss:TripletLoss
base_model: sentence-transformers/all-mpnet-base-v2
widget:
- source_sentence: 'comics creative work in which images and text convey information
such as narratives Comics is a medium used to express ideas with images, often
combined with text or other visual information. It typically takes the form of
a sequence of panels of images. Textual devices such as speech balloons, captions,
and onomatopoeia can indicate dialogue, narration, sound effects, or other information.
There is no consensus among theorists and historians on a definition of comics;
some emphasize the combination of images and text, some sequentiality or other
image relations, and others historical aspects such as mass reproduction or the
use of recurring characters. Cartooning and other forms of illustration are the
most common means of image-making in comics. Photo comics is a form that uses
photographic images. Common forms include comic strips, editorial and gag cartoons,
and comic books. Since the late 20th century, bound volumes such as graphic novels,
comic albums, and tankōbon have become increasingly common, along with webcomics
as well as scientific/medical comics. The history of comics has followed different
paths in different cultures. Scholars have posited a pre-history as far back as
the Lascaux cave paintings. By the mid-20th century, comics flourished, particularly
in the United States, western Europe (especially France and Belgium), and Japan.
The history of European comics is often traced to Rodolphe Töpffer''s cartoon
strips of the 1830s, while Wilhelm Busch and his Max and Moritz also had a global
impact from 1865 on, and became popular following the success in the 1930s of
strips and books such as The Adventures of Tintin. American comics emerged as
a mass medium in the early 20th century with the advent of newspaper comic strips;
magazine-style comic books followed in the 1930s, and the superhero genre became
prominent after Superman appeared in 1938. Histories of Japanese comics and cartooning
(manga) propose origins as early as the 12th century. Japanese comics are generally
held separate from the evolution of Euro-American comics, and Western comic art
probably originated in 17th-century Italy. Modern Japanese comic strips emerged
in the early 20th century, and the output of comic magazines and books rapidly
expanded in the post-World War II era (1945)– with the popularity of cartoonists
such as Osamu Tezuka. Comics has had a lowbrow reputation for much of their history,
but towards the end of the 20th century, they began to find greater acceptance
with the public and academics. The English term comics is used as a singular noun
when it refers to the medium itself (e.g. "Comics is a visual art form."), but
becomes plural when referring to works collectively (e.g. "Comics are popular
reading material."). The comics may be further adapted to animations (anime),
dramas, TV shows, movies. {''source'': ''R. C. Harvey, 2001 {{sfn|Harvey|2001|p|=|76}}'',
''width'': ''30em'', ''aliases'': [''sequential art'', ''ninth art'', ''comic
work'']} {''instance of'': ''literary genre'', ''subclass of'': ''literary work'',
''described by source'': ''Encyclopædia Britannica 11th edition'', ''different
from'': ''Synopsis''}'
sentences:
- 'Ediacaran biota enigmatic tubular and frond-shaped, mostly sessile organisms
that lived during the Ediacaran Period (ca. 635–542 Mya) The Ediacaran (; formerly
Vendian) biota is a taxonomic period classification that consists of all life
forms that were present on Earth during the Ediacaran Period (c. 635–538.8 Mya).
These were enigmatic tubular and frond-shaped, mostly sessile, organisms. Trace
fossils of these organisms have been found worldwide, and represent the earliest
known complex multicellular organisms. The term "Ediacara biota" has received
criticism from some scientists due to its alleged inconsistency, arbitrary exclusion
of certain fossils, and inability to be precisely defined. The Ediacaran biota
may have undergone evolutionary radiation in a proposed event called the Avalon
explosion, 575 million years ago. This was after the Earth had thawed from the
Cryogenian period''s extensive glaciation. This biota largely disappeared with
the rapid increase in biodiversity known as the Cambrian explosion. Most of the
currently existing body plans of animals first appeared in the fossil record of
the Cambrian rather than the Ediacaran. For macroorganisms, the Cambrian biota
appears to have almost completely replaced the organisms that dominated the Ediacaran
fossil record, although relationships are still a matter of debate. The organisms
of the Ediacaran Period first appeared around 600 million years ago and flourished
until the cusp of the Cambrian 538.8 million years ago, when the characteristic
communities of fossils vanished. A diverse Ediacaran community was discovered
in 1995 in Sonora, Mexico, and is approximately 555 million years in age, roughly
coeval with Ediacaran fossils of the Ediacara Hills in South Australia and the
White Sea on the coast of Russia. While rare fossils that may represent survivors
have been found as late as the Middle Cambrian (510–500 Mya), the earlier fossil
communities disappear from the record at the end of the Ediacaran leaving only
curious fragments of once-thriving ecosystems. Multiple hypotheses exist to explain
the disappearance of this biota, including preservation bias, a changing environment,
the advent of predators and competition from other life-forms. A sampling, reported
in 2018, of late Ediacaran strata across the scattered remnants of Baltica (<
560 Mya) suggests the flourishing of the organisms coincided with conditions of
low overall productivity with a very high percentage produced by bacteria, which
may have led to high concentrations of dissolved organic material in the oceans.
Determining where Ediacaran organisms fit in the tree of life has proven challenging;
it is not even established that most of them were animals, with suggestions that
they were lichens (fungus-alga symbionts), algae, protists known as foraminifera,
fungi or microbial colonies, or hypothetical intermediates between plants and
animals. The morphology and habit of some taxa (e.g. Funisia dorothea) suggest
relationships to Porifera or Cnidaria (e.g. Auroralumina). Kimberella may show
a similarity to molluscs, and other organisms have been thought to possess bilateral
symmetry, although this is controversial. Most macroscopic fossils are morphologically
distinct from later life-forms: they resemble discs, tubes, mud-filled bags or
quilted mattresses. Due to the difficulty of deducing evolutionary relationships
among these organisms, some palaeontologists have suggested that these represent
completely extinct lineages that do not resemble any living organism. Palaeontologist
Adolf Seilacher proposed a separate subkingdom level category Vendozoa (now renamed
Vendobionta) in the Linnaean hierarchy for the Ediacaran biota. If these enigmatic
organisms left no descendants, their strange forms might be seen as a "failed
experiment" in multicellular life, with later multicellular life evolving independently
from unrelated single-celled organisms. A 2018 study confirmed that one of the
period''s most-prominent and iconic fossils, Dickinsonia, included cholesterol,
suggesting affinities to animals, fungi, or red algae. {''aliases'': [''Ediacara
biota'', ''Vendozoa'']} {''subclass of'': ''animal'', ''described by source'':
''Brockhaus and Efron Encyclopedic Dictionary'', ''instance of'': ''taxon'', ''on
focus list of Wikimedia project'': ''Wikipedia:Vital articles/Level/4'', ''taxon
rank'': ''genus'', ''CITES Appendix'': ''Appendix II of CITES''}'
- 'political philosophy sub-discipline of philosophy and political science Political
philosophy, or political theory, is the philosophical study of government, addressing
questions about the nature, scope, and legitimacy of public agents and institutions
and the relationships between them. Its topics include politics, justice, liberty,
property, rights, law, and authority: what they are, if they are needed, what
makes a government legitimate, what rights and freedoms it should protect, what
form it should take, what the law is, and what duties citizens owe to a legitimate
government, if any, and when it may be legitimately overthrown, if ever. Political
theory also engages questions of a broader scope, tackling the political nature
of phenomena and categories such as identity, culture, sexuality, race, wealth,
human-nonhuman relations, ethics, religion, and more. Political philosophy is
a branch of philosophy, but it has also played a major part in political science,
within which a strong focus has historically been placed on both the history of
political thought and contemporary political theory (from normative political
theory to various critical approaches). {''boxes'': "{''Library resources box'':
[]}", ''count'': ''1'', ''aliases'': [''political thought'', ''philosophy of politics'']}
{''subclass of'': ''philosophy'', ''instance of'': ''branch of philosophy'', ''on
focus list of Wikimedia project'': ''Wikipedia:Vital articles/Level/4'', ''described
by source'': ''Brockhaus and Efron Encyclopedic Dictionary''}'
- 'São Paulo most populous city in Brazil São Paulo (; Portuguese: [sɐ̃w ˈpawlu]
; Portuguese for ''Saint Paul'') is the capital of the state of São Paulo, as
well as the most populous city in Brazil, the Americas, and both the Western and
Southern Hemispheres. Listed by the Globalization and World Cities Research Network
(GaWC) as an alpha global city, it exerts substantial international influence
in commerce, finance, arts, and entertainment. It is the largest urban area by
population outside Asia and the most populous Portuguese-speaking city in the
world. The city''s name honors Paul the Apostle and people from the city are known
as paulistanos. The city''s Latin motto is Non ducor, duco, which translates as
"I am not led, I lead." Founded in 1554 by Jesuit priests, the city was the center
of the bandeirantes settlers during Colonial Brazil, but it became a relevant
economic force only during the Brazilian coffee cycle in the mid-19th century
and later consolidated its role as the main national economic hub with industrialization
in Brazil in the 20th century, which made the city a cosmopolitan melting pot,
home to the largest Arab, Italian, and Japanese diasporas in the world, with ethnic
neighborhoods like Bixiga, Bom Retiro, and Liberdade, and people from more than
200 other countries. The city''s metropolitan area, Greater São Paulo, is home
to more than 20 million inhabitants and ranks as the most populous in Brazil and
one of the most populous in the world. The process of conurbation between the
metropolitan areas around Greater São Paulo also created the São Paulo Macrometropolis,
the first megalopolis in the Southern Hemisphere, with more than 30 million inhabitants.
São Paulo is the largest urban economy in Latin America and one of the world''s
major financial centres, representing around 10% of the Brazilian GDP and just
over a third of São Paulo state''s GDP. The city is the headquarters of B3, the
largest stock exchange of Latin America by market capitalization, and has several
financial districts, mainly in the areas around Paulista, Faria Lima and Berrini
avenues. São Paulo is home to 63% of established multinationals in Brazil, and
is the source of around one third of the Brazilian scientific production. Its
main university, the University of São Paulo, is often considered the best in
Brazil and Latin America. São Paulo is among the top 100 science and technology
clusters in the world. The metropolis is also home to several of the tallest skyscrapers
in Brazil, including the Alto das Nações, Platina 220, Figueira Altos do Tatuapé,
Mirante do Vale, Edifício Itália, Altino Arantes Building, North Tower and many
others. The city is one of the main cultural hubs in Latin America and it is home
to monuments, parks and museums such as the Latin American Memorial, Ibirapuera
Park, São Paulo Museum of Art, Pinacoteca, Cinemateca, Itaú Cultural, Museum of
Ipiranga, Catavento Museum, Football Museum, Museum of the Portuguese Language,
and the Museum of Image and Sound. São Paulo also holds relevant cultural events
like the São Paulo Jazz Festival, São Paulo Art Biennial, São Paulo Fashion Week,
Lollapalooza, Primavera Sound, Comic Con Experience and the São Paulo Gay Pride
Parade, the second-largest LGBT event in the world. São Paulo was also host of
many sporting events such as the 1950 and 2014 FIFA World Cups, the 1963 Pan American
Games, the São Paulo Indy 300 and the NFL Brazil Games in addition to hosting
the annual Brazilian Grand Prix of Formula One and the Saint Silvester Road Race.
{''name'': ''São Paulo'', ''official_name'': "Municipality of São Paulo<br/>''''Município
de São Paulo''''", ''settlement_type'': ''Municipality'', ''named_for'': ''Paul
the Apostle'', ''founder'': ''Manuel da Nóbrega and Joseph of Anchieta'', ''image_skyline'':
''{{Multiple image\n | perrow |=| 1/3/3/2\n | border |=| infobox\n
| total_width |=| 300\n | caption_align |=| center\n | image1 |=| Marginal_Pinheiros_e_Jockey_Club.jpg\n
| caption1 |=| Skyline from Itaim Bibi, highlighting Parque do Povo, Marginal
Pinheiros, Jockey Club and Pico do Jaraguá (background).\n | image2 |=| Catedral
da Sé em São Paulo.jpg\n | caption2 |=| São Paulo Cathedral \n | image3 |=|
Mausoléu_ao_soldado_constitucionalista_de_1932_04.jpg\n | caption3 |=| Obelisk
at Ibirapuera Park\n | image4 |=| Webysther_20190304150658_-_Parque_da_Independência.jpg\n
| caption4 |=| Ipiranga Museum at Independence Park\n | image5 |=| At_São_Paulo_2018_202.jpg\n
| caption5 |=| Altino Arantes Building\n | image6 |=| Estação_da_Luz_noite_(cropped).jpg\n
| caption6 |=| Luz Station\n | image7 |=| Ponte_estaiada_Octavio_Frias_-_Sao_Paulo_(cropped).jpg\n
| caption7 |=| Octávio Frias de Oliveira Bridge \n | image8 |=| Novo_MASP.jpg\n
| caption8 |=| MASP on Paulista Avenue\n | image9 |=| Teatro Municipal de São
Paulo 8.jpg\n | caption9 |=| Theatro Municipal\n | color |=| white}}'', ''image_flag'':
''Bandeira da cidade de São Paulo.svg'', ''image_shield'': ''Brasão da cidade
de São Paulo.svg'', ''blank_emblem_alt'': ''Wordmark'', ''nickname'': ''\''\''Selva
de Pedra\''\'' (Concrete Jungle); \''\''Terra da Garoa\''\'' (Drizzle Land); \''\''Sampa\''\'';
"Pauliceia Desvairada" (Crazy Pauliceia)'', ''motto'': ''"Non ducor, duco" {{spaces|
2}} <small>(Latin)<br />"I am not led, I lead"</small>'', ''image_map'': ''SaoPaulo
Municip SaoPaulo.svg'', ''mapsize'': ''250px'', ''map_caption'': ''Location in
the state of São Paulo'', ''pushpin_map'': ''Brazil#South America'', ''pushpin_relief'':
''1'', ''pushpin_mapsize'': ''250'', ''pushpin_map_caption'': ''Location in Brazil'',
''coordinates'': ''{{Coord|23|33|S|46|38|W|type:city_region:BR|display|=|it}}'',
''subdivision_type'': ''Country'', ''subdivision_name'': ''Brazil'', ''subdivision_type1'':
''State'', ''subdivision_name1'': ''São Paulo'', ''subdivision_type2'': ''Historic
countries'', ''subdivision_name2'': ''Kingdom of Portugal<br />United Kingdom
of Portugal, Brazil and the Algarves<br />Empire of Brazil'', ''established_title'':
''Founded'', ''established_date'': ''{{Start date and age|1554|1|25|df|=|yes}}'',
''government_type'': ''Mayor–council'', ''governing_body'': ''Municipal Chamber
of São Paulo'', ''leader_party'': ''MDB'', ''leader_title'': ''Mayor'', ''leader_name'':
''Ricardo Nunes'', ''leader_title1'': ''Vice Mayor'', ''leader_name1'': ''Mello
Araújo'', ''area_total_km2'': ''1,521.20'', ''area_total_sq_mi'': ''587.336'',
''area_metro_km2'': ''7,946.96'', ''area_metro_sq_mi'': ''3,068.338'', ''area_urban_km2'':
''11,698'', ''area_blank1_title'': ''Macrometropolis'', ''area_blank1_km2'': ''53,369.61'',
''elevation_m'': ''760'', ''elevation_ft'': ''2500'', ''population_total'': ''11,895,578'',
''population_as_of'': ''2024'', ''population_rank'': ''1st in the Americas<br/>1st
in Brazil'', ''population_density_km2'': ''7,819.86'', ''population_metro'': ''21,518,955
(Greater São Paulo)'', ''population_density_metro_km2'': ''2,714.45'', ''population_blank1_title'':
''Macrometropolis (Extended Metro)'', ''population_blank1'': ''34,500,000'', ''population_demonym'':
''Paulistan'', ''demographics_type1'': ''GDP (nominal) (metro area)'', ''demographics1_title1'':
''Year'', ''demographics1_info1'': ''2023'', ''demographics1_title2'': ''Total'',
''demographics1_info2'': ''$319.3 billion'', ''demographics_type2'': ''GDP (PPP,
constant 2015 values) (metro area)'', ''demographics2_title2'': ''Year'', ''demographics2_info2'':
''2023'', ''demographics2_title3'': ''Total'', ''demographics2_info3'': ''$531.3
billion'', ''postal_code_type'': ''Postal Code (CEP)'', ''postal_code'': ''01000-000'',
''unit_pref'': ''Metric'', ''area_code'': ''+55 11'', ''website'': ''{{URL|https://capital.sp.gov.br}}'',
''timezone'': ''BRT'', ''utc_offset'': ''−03:00'', ''timezone_DST'': ''BRST'',
''utc_offset_DST'': ''−02:00'', ''blank_name'': "''''''HDI'''''' (2010)", ''blank_info'':
''0.805 – <span style="color:#090">very high</span>'', ''aliases'': [''Sao Paulo'',
''São Paulo city'']} {''instance of'': ''city'', ''described by source'': ''Encyclopædia
Britannica 11th edition'', ''subclass of'': ''city'', ''located in time zone'':
''UTC+01:00''}'
- source_sentence: 'square open public spaces in cities or towns, usually rectilinear,
surrounded by buildings, and often located at the junction of two or more thoroughfares
A town square (or public square, urban square, or simply square), also called
a plaza or piazza, is an open public space commonly found in the heart of a traditional
town, and which is used for community gatherings. A square in a city may be called
a city square. Related concepts are the civic center, the market square and the
village green. Most squares are hardscapes suitable for open markets, concerts,
political rallies, and other events that require firm ground. They are not necessarily
a true geometric square. Being centrally located, town squares are usually surrounded
by small shops such as bakeries, meat markets, cheese stores, and clothing stores.
At their center is often a well, monument, statue or other feature. Those with
fountains are sometimes called fountain squares. The term "town square" (especially
via the term "public square") is synonymous with the politics of many cultures,
and the names of a certain town squares, such as the Euromaidan or Red Square,
have become symbolic of specific political events throughout history. {''aliases'':
[''public square'', ''city square'', ''town square'', ''plaza'', ''piazza'', ''urban
square'', ''maydon'', ''pedestrian plaza'']} {''subclass of'': ''geographic location'',
''on focus list of Wikimedia project'': ''Wikipedia:Vital articles/Level/4'',
''described by source'': ''Encyclopædia Britannica 11th edition'', ''instance
of'': ''geographic location'', ''properties for this type'': ''located in the
administrative territorial entity''}'
sentences:
- 'Campus of the University of Tokyo Japanese historical campus The campus of the
University of Tokyo is the location of the first modern Japanese university. The
campus is of historical note for two reasons. First, it was not damaged by air
raids during World War II. Second, many university buildings have been declared
National Treasures of Japan as they are examples of historic architectural design.
This article focuses on registered cultural heritage. {''note'': ''infobox not
present in Wikipedia''} {''instance of'': ''building'', ''subclass of'': ''building'',
''country'': ''Germany'', ''located in the administrative territorial entity'':
''Monmouth'', ''architectural style'': ''Art Nouveau architecture'', ''significant
event'': ''construction (economic activity)''}'
- 'Amoriguard type of paint Amoriguard is a water-based paint with fillers based
on recycled industrial waste. The colour has an effective 70% mass of solids,
which occupy a volume of at least 55% excluding water. It was invented in South
Africa by Mulalo Doyoyo and co-developed by Ryan Purchase. Substances in the paint
such as volatile organic compounds, ammonia, formaldehyde, lead, alkyl phenol
ethoxylate and glycol are low in quantity or absent. It is manufactured below
critical pigment volume concentration (CPVC) which means that most voids between
pigment particles in the dried film are filled with solid particles as opposed
to air. The paint is hydrophobic and chemical-resistant. == References == {''note'':
''infobox not present in Wikipedia''} {''subclass of'': ''building material'',
''instance of'': ''building material'', ''described by source'': ''Encyclopædia
Britannica 11th edition'', ''on focus list of Wikimedia project'': ''Wikipedia:Vital
articles/Level/4'', ''made from material'': ''concrete''}'
- 'Atellan Farce genre of comedy from Latin theatre The Atellan Farce (Latin: Atellanae
Fabulae or Fabulae Atellanae, "favola atellana"; Atellanicum exhodium, "Atella
comedies"), also known as the Oscan Games (Latin: ludi Osci, "Oscan plays"), were
masked improvised farces in Ancient Rome. The Oscan athletic games were very popular,
and usually preceded by longer pantomime plays. The origin of the Atellan Farce
is uncertain, but the farces are similar to other forms of ancient theatre such
as the South Italian Phlyakes, the plays of Plautus and Terence, and Roman mime.
Most historians believe the name is derived from Atella, an Oscan town in Campania.
The farces were written in Oscan and imported to Rome in 391 BC. In later Roman
versions, only the ridiculous characters speak their lines in Oscan, while the
others speak in Latin. {''aliases'': [''ludi Osci'']} {''instance of'': ''theatrical
genre'', ''country'': ''Indonesia'', ''subclass of'': ''theatre''}'
- source_sentence: 'Gone with the Wind 1939 film by Victor Fleming Gone with the Wind
is a 1939 American epic historical romance film adapted from the 1936 novel by
Margaret Mitchell. The film was produced by David O. Selznick of Selznick International
Pictures and directed by Victor Fleming. Set in the American South against the
backdrop of the American Civil War and the Reconstruction era, the film tells
the story of Scarlett O''Hara (Vivien Leigh), the strong-willed daughter of a
Georgia plantation owner, following her romantic pursuit of Ashley Wilkes (Leslie
Howard), who is married to his cousin, Melanie Hamilton (Olivia de Havilland),
and her subsequent marriage to Rhett Butler (Clark Gable). The film had a troubled
production. The start of filming was delayed for two years until January 1939
because Selznick was determined to secure Gable for the role of Rhett, and filming
concluded in July. The role of Scarlett was challenging to cast, and 1,400 unknown
women were interviewed for the part. Sidney Howard''s original screenplay underwent
many revisions by several writers to reduce it to a suitable length. The original
director, George Cukor, was fired shortly after filming began and was replaced
by Fleming, who in turn was briefly replaced by Sam Wood while taking some time
off due to exhaustion. Post-production concluded in November 1939, just a month
before its premiere. It received generally positive reviews upon its release on
December 15, 1939. While the casting was widely praised, the long running time
received criticism. At the 12th Academy Awards, Gone with the Wind received ten
Academy Awards (eight competitive, two honorary) from thirteen nominations, including
wins for Best Picture, Best Director (Fleming), Best Adapted Screenplay (posthumously
awarded to Sidney Howard), Best Actress (Leigh), and Best Supporting Actress (Hattie
McDaniel, becoming the first African American to win an Academy Award). It set
records for the total number of wins and nominations at the time. Gone with the
Wind was immensely popular when first released. It became the highest-earning
film made up to that point and held the record for over a quarter of a century.
When adjusted for monetary inflation, it is still the highest-grossing film in
history. It was re-released periodically throughout the 20th century and became
ingrained in popular culture. Although the film has been criticized as historical
negationism, glorifying slavery and the Lost Cause of the Confederacy myth, it
has been credited with triggering changes in the way in which African Americans
were depicted cinematically. Gone with the Wind is regarded as one of the greatest
films of all time, and in 1989, became one of the twenty-five inaugural films
selected for preservation in the United States National Film Registry. {''name'':
''Gone with the Wind'', ''alt'': ''A film poster showing a man and a woman in
a passionate embrace.'', ''caption'': ''Theatrical release poster'', ''director'':
''Victor Fleming'', ''producer'': ''David O. Selznick'', ''screenplay'': ''Sidney
Howard'', ''based_on'': "{{based on|''''Gone with the Wind''''|Margaret Mitchell}}",
''starring'': "{{plainlist|\n* Clark Gable\n* Vivien Leigh |<!-- Regardless of
where Leigh''s name appears on the posters, she receives second billing in the
film itself -->|\n* Leslie Howard\n* Olivia de Havilland\n|<!-- DO NOT ADD HATTIE
McDANIEL TO THIS LIST. SHE WAS *NOT* STAR BILLED. This is not a matter of editorial
discretion but of how the actors were credited. Only four actors received star
billing: Gable, Leigh, Howard, and de Haviland. This is corroborated by the opening
credits: https://www.youtube.com/watch?v=fFNuDkQxHGA&t=60. Regardless of whether
we agree or not, it is not Wikipedia''s place to revise history. -->}} * Leslie
Howard\n* Olivia de Havilland", ''music'': ''Max Steiner'', ''cinematography'':
''Ernest Haller'', ''editing'': ''{{plainlist|\n* Hal C. Kern\n* James E. Newcom}}'',
''production_companies'': ''{{Plainlist|\n* Selznick International Pictures\n*
Metro-Goldwyn-Mayer}}'', ''distributor'': "Loew''s Inc. {{refn|Loews was the parent
company of MGM.|ref|{{cite book |last1=Gomery |first1=Douglas |last2=Pafort-Overduin
|first2=Clara |title=Movie History: A Survey |edition=2nd |year=2011 |publisher=Taylor
& Francis |isbn=9781136835254 |page=[https://books.google.com/books?id=s0PP2Gm8xNcC&q=Loews+144&pg=PA144
144]}}|</ref>|group|=|nb}}", ''released'': ''{{film date|1939|12|15|Atlanta premiere}}'',
''runtime'': "{{plainlist|\n* 221 minutes\n* 234–238 minutes (with overture,
intermission, entr''acte, and exit music)}}", ''country'': ''United States'',
''language'': ''English'', ''budget'': ''$3.85 million'', ''gross'': ''>$390 million'',
''aliases'': [''GWTW'']} {''instance of'': ''film'', ''color'': ''color'', ''distribution
format'': ''video on demand'', ''CNC film rating (France)'': ''no age restriction'',
''original language of film or TV show'': ''English'', ''FSK film rating'': ''FSK
12'', ''distributed by'': ''Netflix'', ''assessment'': ''Bechdel test'', ''country
of origin'': ''United States'', ''genre'': ''drama film''}'
sentences:
- 'Shibuya Goldfish Japanese manga series Shibuya Goldfish (Japanese: 渋谷金魚, Hepburn:
Shibuya Kingyo) is a Japanese manga series written and illustrated by Hiroumi
Aoi. It was serialized in Square Enix''s Gangan Joker from September 2016 to April
2021 and published in 11 volumes. {''ja_kanji'': ''渋谷金魚'', ''caption'': ''Cover
of the first volume'', ''genre'': ''Horror''} {''instance of'': ''manga'', ''country
of origin'': ''Japan'', ''language of work or name'': ''Japanese'', ''intended
public'': ''seinen'', ''subclass of'': ''manga'', ''genre'': ''romance anime and
manga'', ''original language of film or TV show'': ''Japanese''}'
- 'skald poet in the courts of Scandinavian rulers during the Viking Age A skald,
or skáld (Old Norse: [ˈskɔːld]; Icelandic: [ˈskault], meaning "poet"), is one
of the often named poets who composed skaldic poetry, one of the two kinds of
Old Norse poetry in alliterative verse, the other being Eddic poetry. Skaldic
poems were traditionally composed to honor kings, but were sometimes ex tempore.
They include both extended works and single verses (lausavísur). They are characteristically
more ornate in form and diction than eddic poems, employing many kennings, which
require some knowledge of Norse mythology, and heiti, which are formal nouns used
in place of more prosaic synonyms. Dróttkvætt metre is a type of skaldic verse
form that most often use internal rhyme and alliteration. More than 5,500 skaldic
verses have survived, preserved in more than 700 manuscripts, including in several
sagas and in Snorri Sturluson''s Prose Edda, a handbook of skaldic composition
that led to a revival of the art. Many of these verses are fragments of originally
longer works, and the authorship of many is unknown. The earliest known skald
from whom verses survive is Bragi Boddason, known as Bragi the Old, a Norwegian
skald of the first half of the 9th century. Most known skalds were attached to
the courts of Norwegian kings during the Viking Age, and increasingly were Icelanders.
The subject matter of their extended poems was sometimes mythical before the conversion
to Christianity, thereafter usually historical and encomiastic, detailing the
deeds of the skald''s patron. The tradition continued into the Late Middle Ages.
The standard edition of the skaldic poetic corpus, Den norsk-islandske skjaldedigtning,
was edited by Finnur Jónsson and published in 1908–1915. A new edition was prepared
online by the Skaldic Poetry of the Scandinavian Middle Ages project and began
publication in 2007. {''note'': ''infobox not present in Wikipedia''} {''instance
of'': ''human'', ''occupation'': ''poet'', ''sex or gender'': ''male'', ''copyright
status as a creator'': ''copyrights on works have expired'', ''described by source'':
''Brockhaus and Efron Encyclopedic Dictionary'', ''subclass of'': ''poet'', ''country
of citizenship'': ''France''}'
- 'Deutscher Jugendliteraturpreis German literary award for children''s and young
adult literature (1956-) The Deutscher Jugendliteraturpreis (German Youth Literature
Award) is an annual award established in 1956 by the Federal Ministry of Family
Affairs, Senior Citizens, Women and Youth to recognise outstanding works of children''s
and young adult literature. It is Germany''s only state-funded literary award.
In the past, authors from many countries have been recognised, including non-German
speakers. {''name'': ''Deutscher Jugendliteraturpreis'', ''caption'': ''The bronze
statuette "Momo" (named after the Michael Ende book of the same name) given to
the winners of the "Deutscher Jugendliteraturpreis", created by Detlef Kraft'',
''awarded_for'': "Outstanding children''s literature", ''presenter'': ''Federal
Ministry of Family Affairs, Senior Citizens, Women and Youth'', ''country'': ''Germany'',
''year'': ''1956'', ''year2'': ''2017'', ''website'': ''[http://www.djlp.jugendliteratur.org/
djlp.jugendliteratur.org]'', ''aliases'': [''German Youth Literature Award'']}
{''instance of'': ''literary award'', ''subclass of'': ''literary award'', ''country'':
''Germany'', ''described by source'': ''Dutch Heights''}'
- source_sentence: 'New South Wales Premier''s Literary Awards Literary prizes awarded
by the New South Wales state government in Australia The New South Wales Premier''s
Literary Awards, also known as the NSW Premier''s Literary Awards, were first
awarded in 1979. They are among the richest literary awards in Australia. Notable
prizes include the Christina Stead Prize for Fiction, the Kenneth Slessor Prize
for Poetry, and the Douglas Stewart Prize for Non-Fiction. As of 2019, the Awards
are presented by the NSW Government and administered by the State Library of New
South Wales in association with Create NSW, with support of Multicultural NSW
and the University of Technology Sydney (UTS). Total prize money in 2019 was up
to A$305,000, with eligibility limited to writers, translators and illustrators
with Australian citizenship or permanent resident status. {''aliases'': ["New
South Wales Premier''s Literary Awards"]} {''instance of'': ''literary award'',
''subclass of'': ''literary award'', ''country'': ''Germany'', ''described by
source'': ''Dutch Heights''}'
sentences:
- 'Russia: War, Peace and Diplomacy 2005 essay collection book Russia: War, Peace
and Diplomacy is a 2005 book edited by Mark Erickson and Ljubica Erickson. The
book is a collection of essays from a number of renowned historians including
Omer Bartov, Jürgen Förster, David Glantz, Antony Beevor, Norman Stone, Hew Strachan
and Robert Service. The book was written in honour of historian John Erickson
and also includes essays from his colleagues in the United Kingdom, United States
and Russia. The foreword was written by Sir Michael Howard. {''name'': ''Russia:
War, Peace and Diplomacy'', ''published'': ''2005 (Weidenfeld & Nicolson)'', ''isbn'':
''9780297849131'', ''editors'': ''Mark Eerickson and Ljubica Erickson''} {''instance
of'': ''book'', ''language of work or name'': ''English'', ''subclass of'': ''book'',
''country of origin'': ''United Kingdom'', ''publisher'': ''White Wolf Publishing'',
''copyright status'': ''copyrighted'', ''author'': ''Derek Lambert'', ''described
by source'': ''Meyers Konversations-Lexikon, 4th edition (1885–1890)''}'
- 'soybean oil oil from the seeds of Glycine max Soybean oil (British English: soyabean
oil) is a vegetable oil extracted from the seeds of the soybean (Glycine max).
It is one of the most widely consumed cooking oils and the second most consumed
vegetable oil. As a drying oil, processed soybean oil is also used as a base for
printing inks (soy ink) and oil paints. {''caption'': ''Bottles of soybean oil'',
''tradename'': ''Nutrilipid, Intralipid, others'', ''Drugs.com'': ''{{drugs.com|ppa|fat-emulsion-plant-based}}'',
''DailyMedID'': ''Soybean_oil'', ''pregnancy_AU'': ''B3'', ''pregnancy_US'': ''C'',
''routes_of_administration'': ''Intravenous (IV)'', ''ATC_prefix'': ''none'',
''CAS_number'': ''8001-22-7'', ''DrugBank'': ''DB09422'', ''UNII'': ''241ATL177A'',
''aliases'': [''soya oil'', ''soy bean oil'']} {''instance of'': ''ingredient'',
''subclass of'': ''ingredient'', ''has part(s)'': ''water''}'
- 'transport in Hungary transport in Hungary Transport in Hungary relies on several
main modes, including transport by road, rail, air and water. {''aliases'': [''transportation
in Hungary'', ''Hungary transport'']} {''subclass of'': ''transport''}'
- source_sentence: 'Gomal River river in Afghanistan and Pakistan The Gomal (Urdu:
دریائے گومل, Pashto: ګومل سیند، ګومل دریاب) is a 400-kilometre-long (250 mi) river
in Afghanistan and Pakistan. It rises in northern Afghanistan''s Paktika Province
and joins the Indus River 20 miles south of Dera Ismail Khan, in Pakistan''s Khyber
Pakhtunkhwa province. Gomal University in Dera Ismail Khan and Gomal District
in Afghanistan''s Paktika province are named after the river. {''name'': ''Gomal'',
''map'': ''{{maplink|frame|=|yes|frame-align|=|left|type|=|line|id|=|Q8501|text|=|Interactive
Map}}'', ''map_size'': ''250px'', ''subdivision_type1'': ''Countries'', ''subdivision_name1'':
''Afghanistan and Pakistan'', ''subdivision_type2'': ''Provinces'', ''subdivision_name2'':
''{{hlist|Paktika|Balochistan|Khyber Pakhtunkhwa}}'', ''length'': ''{{convert|400|km|mi|abbr|=|on}}'',
''source1_location'': ''Katawaz Region, Gomal District, Paktika Province, Afghanistan'',
''source1_coordinates'': ''{{coord|32.502974|N|68.901294|E}}'', ''mouth'': ''Indus
River'', ''mouth_location'': ''Dera Ismail Khan, Dera Ismail Khan District, Khyber
Pakhtunkhwa, Pakistan'', ''mouth_coordinates'': ''{{coord|31|36|53|N|70|50|46|E}}'',
''tributaries_left'': ''Wana Khwar'', ''tributaries_right'': ''Zhob River''} {''instance
of'': ''river'', ''country'': ''Spain'', ''drainage basin'': ''Tagus Basin'',
''described by source'': ''Brockhaus and Efron Encyclopedic Dictionary'', ''mouth
of the watercourse'': ''Tagus River'', ''located in the administrative territorial
entity'': ''Community of Madrid''}'
sentences:
- 'Thunnus genus of fishes Thunnus is a genus of ocean-dwelling, ray-finned bony
fish from the mackerel family, Scombridae. More specifically, Thunnus is one of
five genera which make up the tribe Thunnini – a tribe that is collectively known
as the tunas. Also called the true tunas or real tunas, Thunnus consists of eight
species of tuna (more than half of the overall tribe), divided into two subgenera.
Their coloring, metallic blue on top and shimmering silver-white on the bottom,
helps camouflage them from above and below. Atlantic bluefin tuna, the largest
member of this genus, can grow to 15 feet (4.6 m) long and weigh up to 1,500 pounds
(680 kg). All tunas are extremely strong, muscular swimmers, and the yellowfin
tuna is known to reach speeds of up to 50 miles per hour (80 km/h) when pursuing
prey. As with all tunas, members of this genus are warm-blooded, which is a rare
trait among fish; this enables them to tolerate cold waters and to dive to deeper
depths. Bluefin tunas, for example, are found in Newfoundland and Iceland, and
also in the tropical waters of the Gulf of Mexico and the Mediterranean Sea, where
some individuals go each year to spawn. Due to overfishing, the range of this
genus has declined significantly, having been effectively extirpated from the
Black Sea, for example. {''name'': ''True tunas'', ''fossil_range'': ''{{Fossilrange|Tertiary|holocene}}'',
''image_caption'': ''Yellowfin tuna'', ''taxon'': ''Thunnus'', ''authority'':
''South, 1845'', ''type_species'': "''''Scomber thynnus''''", ''type_species_authority'':
''Linnaeus, 1758'', ''subdivision_ranks'': ''Subgenus'', ''subdivision'': "* ''''T.
(Thunnus)'''' (bluefin group)\n* ''''T. (Neothunnus)'''' (yellowfin group)", ''synonyms'':
"*''''Albacora'''' <small>Jordan, 1888</small>\n*''''Germo'''' <small>Jordan,
1888</small>\n*''''Thynnus'''' <small>Aguilera, 2020</small>\n*''''Kishinoella''''
<small>Jordan & Hubbs, 1925</small>\n*''''Neothunnus'''' <small>Kishinouye, 1923</small>\n*''''Orcynus''''
<small>Cuvier, 1816</small>\n*''''Parathunnus'''' <small>Kishinouye, 1923</small>\n*''''Semathunnus''''
<small>Fowler, 1933</small>", ''aliases'': [''tuna'', ''tunafish'', ''tuna fish'',
''tunas'', ''tuna fishes'', ''tunafishes'', ''tunny'', ''tunnies'']} {''subclass
of'': ''fish'', ''instance of'': ''taxon'', ''taxon rank'': ''species'', ''described
by source'': ''Encyclopædia Britannica 11th edition'', ''IUCN conservation status'':
''Least Concern'', ''maintained by WikiProject'': ''WikiProject Invasion Biology'',
''parent taxon'': ''Elasmobranchs''}'
- 'neighborhood in New York City neighborhood located within one of the five boroughs
of the City of New York The neighborhoods in New York City are located within
the five boroughs of the City of New York. Their names and borders are not officially
defined, and they change from time to time. {''title'': ''Articles and topics
related to neighborhoods in New York City'', ''state'': ''collapsed'', ''list1'':
''{{Bronx}} {{Brooklyn}} {{Manhattan}} {{Queens}} {{Staten Island}} {{New York
City}}'', ''aliases'': [''New York City neighborhood'', ''neighborhood in New
York'', ''neighborhoods in New York City'']} {''instance of'': ''neighborhood'',
''subclass of'': ''neighborhood'', ''country'': ''United States'', ''located in
time zone'': ''UTC+02:00'', ''local dialing code'': ''026'', ''licence plate code'':
''FR''}'
- 'National Union Party 1864–1868 Republican and Unionist political alliance The
National Union Party, commonly the Union Party or Unionists, was a wartime coalition
of Republicans, War Democrats, and border state Unconditional Unionists that supported
the Lincoln Administration during the American Civil War. It held the 1864 National
Union Convention that nominated Abraham Lincoln for president and Andrew Johnson
for vice president in the 1864 United States presidential election. Following
Lincoln''s successful re-election and assassination, Johnson tried and failed
to sustain the Union Party as a vehicle for his presidential ambitions. The coalition
did not contest the 1868 elections, but the Republican Party continued to use
the "Union Republican" label throughout the period of Reconstruction. Abraham
Lincoln won the 1860 United States presidential election, receiving 180 electoral
votes and 53% of the popular vote in the free states; opposition to Lincoln was
divided, with most northern Democrats voting for Illinois Senator Stephen Douglas.
Following the Republican victory, Douglas strongly condemned secession and publicly
supported the federal government''s efforts to preserve the Union. Pro-administration
War Democrats in states like Ohio sought to cooperate with Republicans through
the formation of Union parties in opposition to the anti-administration Peace
faction. Elsewhere, the Union Party appeared as a coalition of conservative Republicans
and Democrats opposed by the Radical Republicans. Besides allowing voters of diverse
pre-war partisan allegiances to unite under a common banner, the Union label served
a valuable propaganda purpose by implying the coalition''s opponents were dis-unionists.
The preeminent policy of the National Union Party was the preservation of the
Union by the prosecution of the war to its ultimate conclusion. They rejected
proposals for a negotiated peace as humiliating and ultimately ruinous to the
authority of the national government. The party''s 1864 platform called for the
abolition of slavery by constitutional amendment, a "liberal and just" immigration
policy, completion of the transcontinental railroad, and condemned the French
intervention in Mexico as dangerous to republicanism. {''colorcode'': ''{{party
color|National Union Party (United States)}}'', ''name'': ''National Union Party'',
''logo'': ''Republican presidential ticket 1864b.jpg'', ''logo_size'': ''200px'',
''caption'': ''Campaign banner for the 1864 National Union ticket'', ''leader1_title'':
''Leaders'', ''leader1_name'': ''Abraham Lincoln<br>Andrew Johnson'', ''foundation'':
''{{start date and age|1861}}'', ''dissolution'': ''{{end date and age|1868}}'',
''merger'': ''Republican Party<br>War Democrats<br>Unconditional Union Party'',
''merged'': ''Republican Party<br>Democratic Party'', ''ideology'': ''American
nationalism<br />Unionism<br>Abolitionism'', ''colors'': ''{{nowrap|color box|#CC0C2F|border|=|darkgray|
Red |color box|#FFFFFF|border|=|darkgray| White |color box|#002C77|border|=|darkgray|
Blue}} <br> {{color box|#CC0C2F|border|=|darkgray}} Red {{color box|#FFFFFF|border|=|darkgray}}
White {{color box|#002C77|border|=|darkgray}} Blue {{small|(United States national
colors)}}'', ''country'': ''the United States'', ''aliases'': [''Union Party'']}
{''instance of'': ''political party'', ''number of seats in assembly'': ''{"amount":
"+3", "unit": "1"}'', ''country'': ''Greece'', ''political alignment'': ''centre-left'',
''subclass of'': ''political party'', ''headquarters location'': ''Athens''}'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) <!-- at revision 12e86a3c702fc3c50205a8db88f0ec7c0b6b94a0 -->
- **Maximum Sequence Length:** 384 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
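Note that the final `Normalize()` module L2-normalizes the embeddings, so cosine similarity between two outputs reduces to their dot product; a minimal sketch (example texts are placeholders):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-1-epochs-enhanced_test")
emb = model.encode(["a town square", "a public plaza"])  # shape: (2, 768)
print(emb[0] @ emb[1])  # dot product == cosine similarity for unit-norm vectors
```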
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Alessio-Borgi/all-mpnet-base-v2-margin-based-triplet-loss-finetuned-culture-1-epochs-enhanced_test")
# Run inference
sentences = [
"Gomal River river in Afghanistan and Pakistan The Gomal (Urdu: دریائے گومل, Pashto: ګومل سیند، ګومل دریاب) is a 400-kilometre-long (250 mi) river in Afghanistan and Pakistan. It rises in northern Afghanistan's Paktika Province and joins the Indus River 20 miles south of Dera Ismail Khan, in Pakistan's Khyber Pakhtunkhwa province. Gomal University in Dera Ismail Khan and Gomal District in Afghanistan's Paktika province are named after the river. {'name': 'Gomal', 'map': '{{maplink|frame|=|yes|frame-align|=|left|type|=|line|id|=|Q8501|text|=|Interactive Map}}', 'map_size': '250px', 'subdivision_type1': 'Countries', 'subdivision_name1': 'Afghanistan and Pakistan', 'subdivision_type2': 'Provinces', 'subdivision_name2': '{{hlist|Paktika|Balochistan|Khyber Pakhtunkhwa}}', 'length': '{{convert|400|km|mi|abbr|=|on}}', 'source1_location': 'Katawaz Region, Gomal District, Paktika Province, Afghanistan', 'source1_coordinates': '{{coord|32.502974|N|68.901294|E}}', 'mouth': 'Indus River', 'mouth_location': 'Dera Ismail Khan, Dera Ismail Khan District, Khyber Pakhtunkhwa, Pakistan', 'mouth_coordinates': '{{coord|31|36|53|N|70|50|46|E}}', 'tributaries_left': 'Wana Khwar', 'tributaries_right': 'Zhob River'} {'instance of': 'river', 'country': 'Spain', 'drainage basin': 'Tagus Basin', 'described by source': 'Brockhaus and Efron Encyclopedic Dictionary', 'mouth of the watercourse': 'Tagus River', 'located in the administrative territorial entity': 'Community of Madrid'}",
'National Union Party 1864–1868 Republican and Unionist political alliance The National Union Party, commonly the Union Party or Unionists, was a wartime coalition of Republicans, War Democrats, and border state Unconditional Unionists that supported the Lincoln Administration during the American Civil War. It held the 1864 National Union Convention that nominated Abraham Lincoln for president and Andrew Johnson for vice president in the 1864 United States presidential election. Following Lincoln\'s successful re-election and assassination, Johnson tried and failed to sustain the Union Party as a vehicle for his presidential ambitions. The coalition did not contest the 1868 elections, but the Republican Party continued to use the "Union Republican" label throughout the period of Reconstruction. Abraham Lincoln won the 1860 United States presidential election, receiving 180 electoral votes and 53% of the popular vote in the free states; opposition to Lincoln was divided, with most northern Democrats voting for Illinois Senator Stephen Douglas. Following the Republican victory, Douglas strongly condemned secession and publicly supported the federal government\'s efforts to preserve the Union. Pro-administration War Democrats in states like Ohio sought to cooperate with Republicans through the formation of Union parties in opposition to the anti-administration Peace faction. Elsewhere, the Union Party appeared as a coalition of conservative Republicans and Democrats opposed by the Radical Republicans. Besides allowing voters of diverse pre-war partisan allegiances to unite under a common banner, the Union label served a valuable propaganda purpose by implying the coalition\'s opponents were dis-unionists. The preeminent policy of the National Union Party was the preservation of the Union by the prosecution of the war to its ultimate conclusion. They rejected proposals for a negotiated peace as humiliating and ultimately ruinous to the authority of the national government. The party\'s 1864 platform called for the abolition of slavery by constitutional amendment, a "liberal and just" immigration policy, completion of the transcontinental railroad, and condemned the French intervention in Mexico as dangerous to republicanism. {\'colorcode\': \'{{party color|National Union Party (United States)}}\', \'name\': \'National Union Party\', \'logo\': \'Republican presidential ticket 1864b.jpg\', \'logo_size\': \'200px\', \'caption\': \'Campaign banner for the 1864 National Union ticket\', \'leader1_title\': \'Leaders\', \'leader1_name\': \'Abraham Lincoln<br>Andrew Johnson\', \'foundation\': \'{{start date and age|1861}}\', \'dissolution\': \'{{end date and age|1868}}\', \'merger\': \'Republican Party<br>War Democrats<br>Unconditional Union Party\', \'merged\': \'Republican Party<br>Democratic Party\', \'ideology\': \'American nationalism<br />Unionism<br>Abolitionism\', \'colors\': \'{{nowrap|color box|#CC0C2F|border|=|darkgray| Red |color box|#FFFFFF|border|=|darkgray| White |color box|#002C77|border|=|darkgray| Blue}} <br> {{color box|#CC0C2F|border|=|darkgray}} Red {{color box|#FFFFFF|border|=|darkgray}} White {{color box|#002C77|border|=|darkgray}} Blue {{small|(United States national colors)}}\', \'country\': \'the United States\', \'aliases\': [\'Union Party\']} {\'instance of\': \'political party\', \'number of seats in assembly\': \'{"amount": "+3", "unit": "1"}\', \'country\': \'Greece\', \'political alignment\': \'centre-left\', \'subclass of\': \'political party\', \'headquarters location\': \'Athens\'}',
'Thunnus genus of fishes Thunnus is a genus of ocean-dwelling, ray-finned bony fish from the mackerel family, Scombridae. More specifically, Thunnus is one of five genera which make up the tribe Thunnini – a tribe that is collectively known as the tunas. Also called the true tunas or real tunas, Thunnus consists of eight species of tuna (more than half of the overall tribe), divided into two subgenera. Their coloring, metallic blue on top and shimmering silver-white on the bottom, helps camouflage them from above and below. Atlantic bluefin tuna, the largest member of this genus, can grow to 15 feet (4.6 m) long and weigh up to 1,500 pounds (680 kg). All tunas are extremely strong, muscular swimmers, and the yellowfin tuna is known to reach speeds of up to 50 miles per hour (80 km/h) when pursuing prey. As with all tunas, members of this genus are warm-blooded, which is a rare trait among fish; this enables them to tolerate cold waters and to dive to deeper depths. Bluefin tunas, for example, are found in Newfoundland and Iceland, and also in the tropical waters of the Gulf of Mexico and the Mediterranean Sea, where some individuals go each year to spawn. Due to overfishing, the range of this genus has declined significantly, having been effectively extirpated from the Black Sea, for example. {\'name\': \'True tunas\', \'fossil_range\': \'{{Fossilrange|Tertiary|holocene}}\', \'image_caption\': \'Yellowfin tuna\', \'taxon\': \'Thunnus\', \'authority\': \'South, 1845\', \'type_species\': "\'\'Scomber thynnus\'\'", \'type_species_authority\': \'Linnaeus, 1758\', \'subdivision_ranks\': \'Subgenus\', \'subdivision\': "* \'\'T. (Thunnus)\'\' (bluefin group)\\n* \'\'T. (Neothunnus)\'\' (yellowfin group)", \'synonyms\': "*\'\'Albacora\'\' <small>Jordan, 1888</small>\\n*\'\'Germo\'\' <small>Jordan, 1888</small>\\n*\'\'Thynnus\'\' <small>Aguilera, 2020</small>\\n*\'\'Kishinoella\'\' <small>Jordan & Hubbs, 1925</small>\\n*\'\'Neothunnus\'\' <small>Kishinouye, 1923</small>\\n*\'\'Orcynus\'\' <small>Cuvier, 1816</small>\\n*\'\'Parathunnus\'\' <small>Kishinouye, 1923</small>\\n*\'\'Semathunnus\'\' <small>Fowler, 1933</small>", \'aliases\': [\'tuna\', \'tunafish\', \'tuna fish\', \'tunas\', \'tuna fishes\', \'tunafishes\', \'tunny\', \'tunnies\']} {\'subclass of\': \'fish\', \'instance of\': \'taxon\', \'taxon rank\': \'species\', \'described by source\': \'Encyclopædia Britannica 11th edition\', \'IUCN conservation status\': \'Least Concern\', \'maintained by WikiProject\': \'WikiProject Invasion Biology\', \'parent taxon\': \'Elasmobranchs\'}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,551 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 58 tokens</li><li>mean: 303.77 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 52 tokens</li><li>mean: 298.42 tokens</li><li>max: 384 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 298.18 tokens</li><li>max: 384 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>food presentation art of modifying, processing, arranging, or decorating food to enhance its aesthetic appeal Food presentation is the art of modifying, processing, arranging, or decorating food to enhance its aesthetic appeal. The visual presentation of foods is often considered by chefs at many different stages of food preparation, from the manner of tying or sewing meats, to the type of cut used in chopping and slicing meats or vegetables, to the style of mold used in a poured dish. The food itself may be decorated as in elaborately iced cakes, topped with ornamental sometimes sculptural consumables, drizzled with sauces, sprinkled with seeds, powders, or other toppings, or it may be accompanied by edible or inedible garnishes. Historically, the presentation of food has been used as a show of wealth and power. Such displays often emphasize the complexity of a dish's composition as opposed to its flavors. For instance, ancient sources recall the hosts of Roman banquets adding preciou...</code> | <code>golf course designer occupation; landscape architect specialized in designing golf courses A golf course is the grounds on which the sport of golf is played. It consists of a series of holes, each consisting of a tee box, a fairway, the rough and other hazards, and a green with a cylindrical hole in the ground, known as a "cup". The cup holds a flagstick, known as a "pin". A standard round of golf consists of 18 holes, and as such most courses contain 18 distinct holes; however, there are many 9-hole courses and some that have holes with shared fairways or greens. There are also courses with a non-standard number of holes, such as 12 or 14. The vast majority of golf courses have holes of varying length and difficulties that are assigned a standard score, known as par, that a proficient player should be able to achieve; this is usually three, four or five strokes. Par-3 courses consist of holes all of which have a par of three. Short courses have gained in popularity; these consist of m...</code> | <code>windmill machine that converts the energy of wind into rotational energy A windmill is a machine operated by the force of wind acting on vanes or sails to mill grain (gristmills). Windmills were used throughout the high medieval and early modern periods; the horizontal or panemone windmill first appeared in Persia during the 9th century, and the vertical windmill first appeared in northwestern Europe in the 12th century. Regarded as an icon of Dutch culture, there are approximately 1,000 windmills in the Netherlands today. {'aliases': ['wind mill']} {'instance of': 'geographic location', 'on focus list of Wikimedia project': 'Wikipedia:Vital articles/Level/4', 'occupant': 'World Trade Center'}</code> |
| <code>brick block or a single unit of a ceramic material used in masonry construction A brick is a type of construction material used to build walls, pavements and other elements in masonry construction. Properly, the term brick denotes a unit primarily composed of clay, but is now also used informally to denote units made of other materials or other chemically cured construction blocks. Bricks can be joined using mortar, adhesives or by interlocking. Bricks are usually produced at brickworks in numerous classes, types, materials, and sizes which vary with region, and are produced in bulk quantities. Block is a similar term referring to a rectangular building unit composed of clay or concrete, but is usually larger than a brick. Lightweight bricks (also called lightweight blocks) are made from expanded clay aggregate. Fired bricks are one of the longest-lasting and strongest building materials, sometimes referred to as artificial stone, and have been used since c. 4000 BC. Air-dried bricks, ...</code> | <code>boomtown community that experiences sudden and rapid population and economic growth A boomtown is a community that undergoes sudden and rapid population and economic growth, or that is started from scratch. The growth is normally attributed to the nearby discovery of a precious resource such as gold, silver, or oil, although the term can also be applied to communities growing very rapidly for different reasons, such as a proximity to a major metropolitan area, large infrastructure projects, or an attractive climate. {'note': 'infobox not present in Wikipedia'} {'subclass of': 'city', 'part of': 'metropolitan area', 'country': 'Germany', 'instance of': 'concept'}</code> | <code>Crispy Crunch Canadian chocolate bar Crispy Crunch is a hard chocolate bar with a crispy peanut butter flake inside that is made by Cadbury in Canada. Harold Oswin, an employee of William Neilson, developed "Crispy Crunch" in 1930. {'name': 'Crispy Crunch', 'logo': 'Crispycrunch logo.png', 'logo_size': '200', 'producttype': 'Chocolate bar', 'currentowner': 'Cadbury', 'country': 'Canada', 'introduced': '{{start date and age|1930}}'} {'subclass of': 'food', 'instance of': 'food', 'has part(s)': 'flour', 'maintained by WikiProject': 'WikiProject Intangible Cultural Heritage', 'course': 'main course'}</code> |
| <code>Comics of the United Kingdom comic originating in the United Kingdom A British comic is a periodical published in the United Kingdom that contains comic strips. It is generally referred to as a comic or a comic magazine, and historically as a comic paper. As of 2014, the three longest-running comics of all time were all British. British comics are usually comics anthologies which are typically aimed at children, and are published weekly, although some are also published on a fortnightly or monthly schedule. The two most popular British comics, The Beano and The Dandy, were released by DC Thomson in the 1930s. By 1950 the weekly circulation of both reached two million. Explaining the enormous popularity of comics in British popular culture during this period, Anita O’Brien, director curator at London's Cartoon Museum, states: "When comics like The Beano and Dandy were invented back in the 1930s – and through really to the 1950s and 60s – these comics were almost the only entertainment a...</code> | <code>Romanesque architecture architectural style of Medieval Europe Romanesque architecture is an architectural style of medieval Europe that was predominant in the 11th and 12th centuries. The style eventually developed into the Gothic style with the shape of the arches providing a simple distinction: the Romanesque is characterized by semicircular arches, while the Gothic is marked by the pointed arches. The Romanesque emerged nearly simultaneously in multiple countries of Western Europe; its examples can be found across the continent, making it the first pan-European architectural style since Imperial Roman architecture. Similarly to Gothic, the name of the style was transferred onto the contemporary Romanesque art. Combining features of ancient Roman and Byzantine buildings and other local traditions, Romanesque architecture is known by its massive quality, thick walls, round arches, sturdy pillars, barrel vaults, large towers and decorative arcading. Each building has clearly defined f...</code> | <code>Castillo de Suel castle Sohail Castle (Spanish: Castillo Sohail) is a castle in Fuengirola, Spain. It is a historic fortress located in the coastal town of Fuengirola, situated along the Costa del Sol in the province of Málaga, Andalusia, Spain. The castle sits atop a hill overlooking the Mediterranean Sea, providing a strategic vantage point for controlling the surrounding area. Throughout its history, the castle has played a significant role in various historical events, and today it is a popular tourist attraction and cultural venue in the region. {'name': 'Sohail Castle', 'native_name': 'Castillo Sohail', 'caption': 'The Sohail Castle', 'location': 'Fuengirola, Spain', 'coordinates': '{{coord|36.525|-4.629|type:landmark_region:ES-DEV|display|=|inline,title}}', 'aliases': ['Castillo Sohail']} {'instance of': 'monument', 'subclass of': 'monument', 'heritage designation': 'Bien de Interés Cultural', 'country': 'Spain', 'genre': 'public art', 'made from material': 'bronze'}</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 0.5
}
```
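With these settings, the loss for a triplet (anchor a, positive p, negative n) is max(‖f(a) − f(p)‖₂ − ‖f(a) − f(n)‖₂ + 0.5, 0): positives must end up at least 0.5 closer to the anchor than negatives. A minimal sketch of constructing this loss (the original training script is not part of this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletDistanceMetric, TripletLoss

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Matches the parameters above: Euclidean distance, margin 0.5
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=0.5,
)
```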
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 1
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
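A hedged sketch of how these non-default settings plug into the Sentence Transformers v3 trainer; the toy dataset below only stands in for the unnamed (sentence_0, sentence_1, sentence_2) triplet dataset described earlier, and the output directory is an arbitrary choice:

```python
from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import TripletDistanceMetric, TripletLoss
from sentence_transformers.training_args import MultiDatasetBatchSamplers

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
loss = TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=0.5)

# Hypothetical stand-in for the real anchor/positive/negative triplets
train_dataset = Dataset.from_dict({
    "sentence_0": ["anchor text"],
    "sentence_1": ["positive text"],
    "sentence_2": ["negative text"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="mpnet-triplet-finetune",  # assumption: any local path
    num_train_epochs=1,
    fp16=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss).train()
```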
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.6105 | 500 | 0.2649 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
TOMFORD79/Zata_30 | TOMFORD79 | 2025-05-02T09:27:00Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-02T08:54:31Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
gradientrouting-spar/qwen_ft_doutcome_all_seed1_30Apr_gradclipping_epoch15_checkpoint | gradientrouting-spar | 2025-05-02T09:24:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T09:24:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/void-1-7b-GGUF | mradermacher | 2025-05-02T09:20:49Z | 1,460 | 1 | transformers | [
"transformers",
"gguf",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:voidai-research/void-1-7b",
"base_model:quantized:voidai-research/void-1-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-03T08:40:49Z | ---
base_model: voidai-research/void-1-7b
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/voidai-research/void-1-7b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/void-1-7b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
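As a minimal, hedged example (not from the original card), a quant from the table below can be run locally with `llama-cpp-python`; the filename matches the Q4_K_M row, and any other quant works the same way:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetches the chosen quant from this repo via the Hugging Face Hub
llm = Llama.from_pretrained(
    repo_id="mradermacher/void-1-7b-GGUF",
    filename="void-1-7b.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your hardware
)

out = llm("Quantization in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```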
## Provided Quants
(Sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/void-1-7b-GGUF/resolve/main/void-1-7b.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Kenazin/bloomz-7b1-peft-p-tuning-v2-13 | Kenazin | 2025-05-02T09:18:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T09:18:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
imaginaryi/TranslateEnglishToKorean_3.8b_model | imaginaryi | 2025-05-02T09:15:17Z | 0 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T08:21:58Z | ---
license: apache-2.0
---
|
kimxxxx/mistral_r128_alpah256_batch16_gradient2_Ler6e-5_fulldataset_1040steps | kimxxxx | 2025-05-02T09:14:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T09:14:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jimkap/APEL-facebook-v.1 | jimkap | 2025-05-02T09:09:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T09:08:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aminlouhichi/gemma-3-cdg71-lora | aminlouhichi | 2025-05-02T09:09:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-30T20:19:49Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aminlouhichi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aminlouhichi/outputs | aminlouhichi | 2025-05-02T09:09:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T09:08:47Z | ---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [unsloth/gemma-3-1b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-1b-it-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aminlouhichi/outputs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/amin-louhichi-ds-none/CDG71/runs/xs2ij9in)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Siddharth63/Qwen3-8B-Base-4bits-AutoRound-sym | Siddharth63 | 2025-05-02T09:07:29Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"4-bit",
"auto-round",
"region:us"
] | null | 2025-05-02T08:31:27Z | ---
license: apache-2.0
---
```bash
pip install --upgrade auto-round transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from auto_round import AutoRoundConfig  # must be imported so the auto-round quantization format is registered

quantized_model_path = "Siddharth63/Qwen3-8B-Base-4bits-AutoRound-sym"

# Pick the best available inference kernel automatically
quantization_config = AutoRoundConfig(backend="auto")

model = AutoModelForCausalLM.from_pretrained(
    quantized_model_path,
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path)

text = "Atherosclerosis"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
``` |
Siddharth63/Qwen3-8B-Base-2bits-AutoRound-GPTQ-sym | Siddharth63 | 2025-05-02T09:07:09Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"2-bit",
"gptq",
"region:us"
] | null | 2025-05-02T08:31:41Z | ---
license: apache-2.0
---
```bash
pip install --upgrade auto-round transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from auto_round import AutoRoundConfig  # must be imported so the auto-round quantization format is registered

quantized_model_path = "Siddharth63/Qwen3-8B-Base-2bits-AutoRound-GPTQ-sym"

# Pick the best available inference kernel automatically
quantization_config = AutoRoundConfig(backend="auto")

model = AutoModelForCausalLM.from_pretrained(
    quantized_model_path,
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path)

text = "Atherosclerosis"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
``` |
Amit65/whisper-small-mr-V2.1-lora | Amit65 | 2025-05-02T09:07:07Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"tensorboard",
"safetensors",
"code",
"automatic-speech-recognition",
"mr",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2025-04-28T09:01:13Z | ---
license: mit
datasets:
- mozilla-foundation/common_voice_11_0
language:
- mr
metrics:
- wer
base_model:
- openai/whisper-small
new_version: openai/whisper-small
pipeline_tag: automatic-speech-recognition
library_name: adapter-transformers
tags:
- code
--- |
nicolaadrah/physics_adapted_model_project | nicolaadrah | 2025-05-02T09:05:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"region:us"
] | null | 2025-05-02T08:16:41Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
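The YAML header names `unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit` as the base model, so a minimal PEFT loading sketch might look like the following; everything beyond the two repo names is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit base model named in the YAML header above.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit")

# Attach the adapter weights from this repository.
model = PeftModel.from_pretrained(base, "nicolaadrah/physics_adapted_model_project")

inputs = tokenizer("State Newton's second law.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```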
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
jcofresh/ts_ticketing_modelv2.1 | jcofresh | 2025-05-02T09:03:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T08:56:43Z | ---
base_model: unsloth/mistral-7b-instruct-v0.3
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jcofresh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LucileFavero/aaec_qw8_T | LucileFavero | 2025-05-02T09:02:35Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T09:01:30Z | ---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LucileFavero
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
krisjaasenzxczxczxc/scxvdvzdf | krisjaasenzxczxczxc | 2025-05-02T09:01:15Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-02T09:01:15Z | ---
license: creativeml-openrail-m
---
|
mradermacher/GLM-4-32B-0414-GGUF | mradermacher | 2025-05-02T08:59:52Z | 366 | 1 | transformers | [
"transformers",
"gguf",
"zh",
"en",
"base_model:THUDM/GLM-4-32B-0414",
"base_model:quantized:THUDM/GLM-4-32B-0414",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T03:29:12Z | ---
base_model: THUDM/GLM-4-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
no_imatrix: '[1]4.8018,[2]3.9219,[3]3.6737,nan detected in blk.1.ffn_up.weight'
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/THUDM/GLM-4-32B-0414
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
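As a concrete illustration, a single-file quant from the table below can be loaded with the `llama-cpp-python` bindings; this sketch assumes the Q4_K_M file has already been downloaded, and the context size is an arbitrary choice, not a recommendation from this card:
```python
from llama_cpp import Llama

# Load the locally downloaded Q4_K_M quant from the table below.
llm = Llama(model_path="GLM-4-32B-0414.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain in one sentence what a GGUF quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```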
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.IQ4_XS.gguf) | IQ4_XS | 17.9 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q4_K_S.gguf) | Q4_K_S | 18.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q4_K_M.gguf) | Q4_K_M | 19.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q5_K_S.gguf) | Q5_K_S | 22.6 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q5_K_M.gguf) | Q5_K_M | 23.2 | |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q6_K.gguf) | Q6_K | 26.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GLM-4-32B-0414-GGUF/resolve/main/GLM-4-32B-0414.Q8_0.gguf) | Q8_0 | 34.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen3-30B-A3B-Base-GGUF | mradermacher | 2025-05-02T08:59:32Z | 327 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-30B-A3B-Base",
"base_model:quantized:Qwen/Qwen3-30B-A3B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T15:30:48Z | ---
base_model: Qwen/Qwen3-30B-A3B-Base
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Qwen/Qwen3-30B-A3B-Base
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Base-GGUF/resolve/main/Qwen3-30B-A3B-Base.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RituGujela100/gemma-qlora-customer-support-v2.0 | RituGujela100 | 2025-05-02T08:59:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"en",
"base_model:google/gemma-1.1-2b-it",
"base_model:finetune:google/gemma-1.1-2b-it",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T08:52:07Z | ---
license: mit
language:
- en
base_model:
- google/gemma-1.1-2b-it
pipeline_tag: text-generation
library_name: transformers
--- |
herculesnode/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_savage_pelican | herculesnode | 2025-05-02T08:53:49Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am territorial savage pelican",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-07T15:45:35Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_savage_pelican
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am territorial savage pelican
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_savage_pelican
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="herculesnode/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-territorial_savage_pelican", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/WebThinker-QwQ-32B-i1-GGUF | mradermacher | 2025-05-02T08:45:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:lixiaoxi45/WebThinker-QwQ-32B",
"base_model:quantized:lixiaoxi45/WebThinker-QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-02T02:17:14Z | ---
base_model: lixiaoxi45/WebThinker-QwQ-32B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lixiaoxi45/WebThinker-QwQ-32B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/WebThinker-QwQ-32B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
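For example, one of the imatrix quants listed below can be fetched and loaded roughly as follows; the file name matches the i1-Q4_K_M row, and the loader choice (`llama-cpp-python`) is an assumption, not part of this card:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the i1-Q4_K_M file listed in the table below.
path = hf_hub_download(
    repo_id="mradermacher/WebThinker-QwQ-32B-i1-GGUF",
    filename="WebThinker-QwQ-32B.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello!", max_tokens=16)["choices"][0]["text"])
```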
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/WebThinker-QwQ-32B-i1-GGUF/resolve/main/WebThinker-QwQ-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Tingchenliang/TT | Tingchenliang | 2025-05-02T08:42:19Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T08:42:16Z | ---
license: apache-2.0
---
|
leonardozzy/Qwen2.5-1.5B-Open-R1-Distill | leonardozzy | 2025-05-02T08:39:31Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-24T13:19:11Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="leonardozzy/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/leonardozzy-openai/huggingface/runs/yiokb3ox)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
solongeran/Flux.1D_Grand_Piano | solongeran | 2025-05-02T08:35:09Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:mit",
"region:us"
] | text-to-image | 2025-05-02T08:34:25Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_3.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_6.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_8.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_11.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_12.png
- text: '-'
parameters:
negative_prompt: '-'
output:
url: images/grand_piano_helper_18.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Grand Piano, piano
license: mit
---
# Flux.1D_Grand_Piano_LoRA_SD
<Gallery />
## Model description
This LoRA works with Flux base models (flux.1-dev and related checkpoints) and produces highly detailed, realistic pianos. The training data comes mainly from grand pianos.
Particular attention was paid to detail density, detail fidelity, and correct scaling (the arrangement of the individual elements and components).
A cascade model derived from this base LoRA will be released shortly; its training data is currently being processed and the split logic is being computed.
The LoRA is stable in common open workflows; mixing strengths from 50/50 up to 100/100 are possible.


## Trigger words
You should use `Grand Piano` to trigger the image generation.
You should use `piano` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/solongeran/Flux.1D_Grand_Piano/tree/main) them in the Files & versions tab.
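A minimal diffusers sketch for applying this LoRA on top of FLUX.1-dev might look like the following; the prompt, step count, and guidance scale are assumptions, not values from this card:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load this LoRA and include the trigger words from the section above.
pipe.load_lora_weights("solongeran/Flux.1D_Grand_Piano")
image = pipe(
    "a Grand Piano on a concert stage, piano, photorealistic, detailed",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("grand_piano.png")
```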
|
netalabs/qwen-32b-coder-shadcn | netalabs | 2025-05-02T08:34:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T08:33:38Z | ---
base_model: unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** netalabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-coder-32b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AnjaliSarawgi/test-ocr-v2 | AnjaliSarawgi | 2025-05-02T08:31:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-02T08:30:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
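Since the repo is tagged `vision-encoder-decoder` with the image-text-to-text pipeline, a hedged loading sketch might look like the following; it assumes the repository ships a compatible image processor and tokenizer, which this card does not confirm:
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("AnjaliSarawgi/test-ocr-v2")
# Assumption: processor and tokenizer files are present in the same repo.
processor = AutoImageProcessor.from_pretrained("AnjaliSarawgi/test-ocr-v2")
tokenizer = AutoTokenizer.from_pretrained("AnjaliSarawgi/test-ocr-v2")

image = Image.open("sample_line.png").convert("RGB")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
ids = model.generate(pixel_values, max_new_tokens=64)
print(tokenizer.batch_decode(ids, skip_special_tokens=True)[0])
```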
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kate1130/koelectra-f1-bullying-classifier | kate1130 | 2025-05-02T08:28:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:monologg/koelectra-base-v3-discriminator",
"base_model:finetune:monologg/koelectra-base-v3-discriminator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T08:24:04Z | ---
library_name: transformers
license: apache-2.0
base_model: monologg/koelectra-base-v3-discriminator
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: koelectra-f1-bullying-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-f1-bullying-classifier
This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6534
- F1: 0.8860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
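For reference, the list above corresponds roughly to the following `TrainingArguments`; this is a reconstruction, not the original training script:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="koelectra-f1-bullying-classifier",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```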
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.216 | 1.0 | 109 | 0.6775 | 0.8532 |
| 0.5172 | 2.0 | 218 | 0.4718 | 0.8483 |
| 0.3042 | 3.0 | 327 | 0.3439 | 0.8873 |
| 0.2072 | 4.0 | 436 | 0.3407 | 0.8878 |
| 0.1259 | 5.0 | 545 | 0.5322 | 0.8611 |
| 0.0951 | 6.0 | 654 | 0.4395 | 0.8908 |
| 0.069 | 7.0 | 763 | 0.4264 | 0.8878 |
| 0.0445 | 8.0 | 872 | 0.5069 | 0.8973 |
| 0.0252 | 9.0 | 981 | 0.5728 | 0.8916 |
| 0.025 | 10.0 | 1090 | 0.6534 | 0.8860 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
clembench-playpen/llama3.1_8B_DPO_turn-level_10Klimit | clembench-playpen | 2025-05-02T08:25:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision",
"base_model:finetune:clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T00:33:35Z | ---
base_model: clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- dpo
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision](https://huggingface.co/clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="clembench-playpen/llama3.1_8B_DPO_turn-level_10Klimit", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dmazzaccara_backup/playpen_llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision/runs/c19pe0ia)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
prashantsaini/testing02-05-2025-01-merged | prashantsaini | 2025-05-02T08:24:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T08:10:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
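The repo is tagged for conversational text generation with a Llama architecture, so a minimal sketch could look like this; whether the checkpoint ships a chat template is an assumption based on the tags:
```python
from transformers import pipeline

# Assumption: the merged checkpoint in this repo loads directly as a causal LM.
generator = pipeline(
    "text-generation",
    model="prashantsaini/testing02-05-2025-01-merged",
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize what a merged fine-tune is."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```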
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mattimax/DATA-AI_Chat_3_360M-11M-Intruct | Mattimax | 2025-05-02T08:22:58Z | 4 | 0 | null | [
"safetensors",
"llama",
"text-generation-inference",
"it",
"en",
"base_model:Mattimax/DATA-AI_Chat_3_360M-Intruct",
"base_model:finetune:Mattimax/DATA-AI_Chat_3_360M-Intruct",
"license:apache-2.0",
"region:us"
] | null | 2025-04-30T18:44:51Z | ---
license: apache-2.0
language:
- it
- en
base_model:
- Mattimax/DATA-AI_Chat_3_360M-Intruct
tags:
- text-generation-inference
---
# Mattimax/DATA-AI_Chat_3_360M-11M-Intruct
**⚠️ Experimental: use at your own risk**
**⚠️ WARNING: EXPERIMENTAL MODEL ⚠️**
---
## 📌 Overview
`DATA-AI_Chat_3_360M-11M-Intruct` is an **experimental** autoregressive language model developed by **M.INC. (Mattimax)**. It is the **first model in the world** with **360 million parameters** able to handle a **context window of 11 million tokens**, a threshold never reached before at this scale.
The model is designed for *instruction-following* tasks in Italian and English, but it has not yet undergone an exhaustive validation process. Its use is recommended only in research and development environments.
---
## 🚧 Project status
- **Type:** LLM for instruction-following
- **Parameters:** 360M
- **Maximum context:** 11,000,000 tokens (experimental)
- **Advanced techniques:** LongRoPE + dynamic interpolation, adaptive positional scaling (partially documented)
- **Precision:** fp16
- **Architecture:** compatible with LLaMA-like transformers
⚠️ The model **has not been tested extensively** on public datasets or official benchmarks. Its behavior on very long sequences is still under study.
---
## 🔬 Implemented technologies
To reach such an extended context, **innovative techniques** for rotary position embedding and dynamic interpolation were adopted and adapted, including (a hypothetical configuration sketch follows this list):
- **Custom LongRoPE** with non-linear inverse frequencies.
- **Dynamic interpolation** of the positional context (similar to YaRN).
- **Adaptive scaling** as a function of sequence length, with smooth transitions between thresholds.
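As a rough illustration of the YaRN-style interpolation mentioned above, a `rope_scaling` override in a transformers config could look like the following; the actual scheme and values used by this model are not published, so every number here is hypothetical:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Mattimax/DATA-AI_Chat_3_360M-11M-Intruct")
# Hypothetical YaRN-style scaling; none of these numbers come from this card.
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 32.0,                            # assumed context-extension factor
    "original_max_position_embeddings": 8192,  # assumed pre-extension context
}
```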
---
## 🧪 Usage example (advanced)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Mattimax/DATA-AI_Chat_3_360M-11M-Intruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Mattimax/DATA-AI_Chat_3_360M-11M-Intruct")

prompt = "Write an original 10-million-token story about a sentient artificial intelligence..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0]))
```
---
## 🏷️ License
Released for **experimental** purposes by **M.INC.** Commercial use and unauthorized redistribution of the model or its underlying technologies are not permitted.
---
## 📢 Contact
Created by: [Mattimax](https://huggingface.co/Mattimax)
Organization: **M.INC.**
For inquiries, collaborative studies, or licensing, contact via Hugging Face.
---
## ⚠️ Disclaimer
**RWKV model with a theoretical context window of 11 million tokens. Experimental phase: we are still verifying its real efficiency and compatibility with inputs of this length.**
This model is provided **as is**, without any guarantee that it works. It may produce unexpected, incomplete, or incoherent results. Use in medical, legal, or other critical settings is **strongly discouraged**. |
kk-aivio/c1b9f177-e40b-4e72-82ff-bfb5d4f29525 | kk-aivio | 2025-05-02T08:19:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T08:19:40Z | ---
library_name: transformers
model_name: kk-aivio/c1b9f177-e40b-4e72-82ff-bfb5d4f29525
tags:
- generated_from_trainer
licence: license
---
# Model Card for kk-aivio/c1b9f177-e40b-4e72-82ff-bfb5d4f29525
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kk-aivio/c1b9f177-e40b-4e72-82ff-bfb5d4f29525", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Planetoid_27B_V.2-GGUF | mradermacher | 2025-05-02T08:18:21Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"creative",
"en",
"ru",
"base_model:OddTheGreat/Planetoid_27B_V.2",
"base_model:quantized:OddTheGreat/Planetoid_27B_V.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T03:32:28Z | ---
base_model: OddTheGreat/Planetoid_27B_V.2
language:
- en
- ru
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
- roleplay
- creative
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/OddTheGreat/Planetoid_27B_V.2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Planetoid_27B_V.2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q2_K.gguf) | Q2_K | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q3_K_S.gguf) | Q3_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q3_K_L.gguf) | Q3_K_L | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.IQ4_XS.gguf) | IQ4_XS | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q4_K_M.gguf) | Q4_K_M | 16.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q5_K_S.gguf) | Q5_K_S | 18.9 | |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q5_K_M.gguf) | Q5_K_M | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Planetoid_27B_V.2-GGUF/resolve/main/Planetoid_27B_V.2.Q8_0.gguf) | Q8_0 | 28.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
clembench-playpen/llama3.1_8B_DPO_turn-level_10Klimit_backup | clembench-playpen | 2025-05-02T08:17:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision",
"base_model:finetune:clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T00:37:07Z | ---
base_model: clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision
library_name: transformers
model_name: outputs
tags:
- generated_from_trainer
- unsloth
- trl
- dpo
licence: license
---
# Model Card for outputs
This model is a fine-tuned version of [clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision](https://huggingface.co/clembench-playpen/llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="clembench-playpen/llama3.1_8B_DPO_turn-level_10Klimit_backup", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dmazzaccara_backup/playpen_llama-3.1-8B-Instruct_playpen_SFT_DFINAL_0.7K-steps_merged_full_precision/runs/602c2cqn)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
andreeasora/medical-finetune1-roLlama3-8b-instruct | andreeasora | 2025-05-02T08:14:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"base_model:OpenLLM-Ro/RoLlama3-8b-Instruct",
"base_model:finetune:OpenLLM-Ro/RoLlama3-8b-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T05:09:18Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: OpenLLM-Ro/RoLlama3-8b-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
georgeiac00/FinGPT_v3_3_llama_tokenizer | georgeiac00 | 2025-05-02T08:12:52Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2025-05-02T08:12:49Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
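For reference, the configuration above corresponds one-to-one to the following `BitsAndBytesConfig` (a sketch of how it would be expressed in code):

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```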
### Framework versions
- PEFT 0.5.0
|
kostiantynk-outlook/asd | kostiantynk-outlook | 2025-05-02T08:12:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T08:11:40Z | ---
library_name: transformers
model_name: kostiantynk-outlook/asd
tags:
- generated_from_trainer
licence: license
---
# Model Card for kostiantynk-outlook/asd
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
grohitraj/llama-3-8b-Instruct-bnb-4bit_sensemaking | grohitraj | 2025-05-02T08:02:14Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T08:01:24Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** grohitraj
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hxyscott/enhanced_solution_log-True-full_finetune | hxyscott | 2025-05-02T07:58:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T02:58:02Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lars1976/larson | Lars1976 | 2025-05-02T07:51:25Z | 15 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-04-30T16:03:53Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: larson
---
# Larson
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `larson` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "larson",
"lora_weights": "https://huggingface.co/Lars1976/larson/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Lars1976/larson', weight_name='lora.safetensors')
image = pipeline('larson').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Lars1976/larson/discussions) to add images that show off what you’ve made with this LoRA.
|
wandererupak/wave2vec-bert-flac-check20percent-finallllyy20data3e-5 | wandererupak | 2025-05-02T07:48:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-02T06:31:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
naveennagar0909/lora-coke-Flux-dev | naveennagar0909 | 2025-05-02T07:41:26Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-05-02T07:15:38Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of sks coke
widget:
- text: A photo of sks coke on a mountain
output:
url: image_0.png
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - naveennagar0909/lora-coke-Flux-dev
<Gallery />
## Model description
These are naveennagar0909/lora-coke-Flux-dev LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks coke` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/naveennagar0909/lora-coke-Flux-dev/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
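Until the snippet above is filled in, here is a minimal sketch based on the standard SDXL + LoRA diffusers workflow (the `weight_name` is an assumption — check the Files tab for the actual filename):

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# weight_name below is the usual DreamBooth output name; adjust if the repo differs
pipeline.load_lora_weights("naveennagar0909/lora-coke-Flux-dev", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("A photo of sks coke on a mountain").images[0]
image.save("coke.png")
```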
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
bawin/lora-r32 | bawin | 2025-05-02T07:41:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-7B",
"base_model:finetune:unsloth/Qwen2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T07:40:54Z | ---
base_model: unsloth/Qwen2.5-7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** bawin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-7B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Comet_12B_V.5-i1-GGUF | mradermacher | 2025-05-02T07:28:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:OddTheGreat/Comet_12B_V.5",
"base_model:quantized:OddTheGreat/Comet_12B_V.5",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-02T06:04:50Z | ---
base_model: OddTheGreat/Comet_12B_V.5
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/OddTheGreat/Comet_12B_V.5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Comet_12B_V.5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
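For example, a minimal sketch using the `llama-cpp-python` bindings (one possible runtime — any GGUF-compatible runtime works; the filename below is one of the quants from the table):

```python
from llama_cpp import Llama

# Load a single-file GGUF quant (download it from this repo first)
llm = Llama(model_path="Comet_12B_V.5.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```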
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q4_0.gguf) | i1-Q4_0 | 7.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q4_1.gguf) | i1-Q4_1 | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Comet_12B_V.5-i1-GGUF/resolve/main/Comet_12B_V.5.i1-Q6_K.gguf) | i1-Q6_K | 9.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
SpenceSpence/SpenceSpence | SpenceSpence | 2025-05-02T07:26:24Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T07:26:24Z | ---
license: apache-2.0
---
|
mveroe/Llama-3.2-1B-Instruct-safecoder-1.5-BadCode | mveroe | 2025-05-02T07:22:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T11:45:28Z | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Instruct-safecoder-1.5-BadCode
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct-safecoder-1.5-BadCode
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2000
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
AventIQ-AI/sentiment-analysis-for-court-case-sentiment | AventIQ-AI | 2025-05-02T07:22:44Z | 0 | 1 | null | [
"safetensors",
"bert",
"region:us"
] | null | 2025-05-02T07:14:52Z | # BERT-Base-Uncased Quantized Model for Court-Case Sentiment Analysis
This repository hosts a quantized version of the BERT model, fine-tuned for court-case sentiment classification. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.
## Model Details
- **Model Architecture:** BERT Base Uncased
- **Task:** Court-Case Sentiment Analysis
- **Dataset:** Stanford Sentiment Treebank v2 (SST2)
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch
# Load quantized model
quantized_model_path = "AventIQ-AI/sentiment-analysis-for-court-case-sentiment"
quantized_model = BertForSequenceClassification.from_pretrained(quantized_model_path)
quantized_model.eval() # Set to evaluation mode
quantized_model.half() # Convert model to FP16
# Load tokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Define a test sentence
test_sentence = "The court's decision in the Thompson case is deeply disappointing. Despite clear evidence of misconduct, the defendant received only a light sentence. Many are questioning whether justice was truly served, especially given how similar cases have resulted in harsher penalties. This outcome undermines public trust in the legal system."
# Tokenize input
inputs = tokenizer(test_sentence, return_tensors="pt", padding=True, truncation=True, max_length=128)
# Ensure input tensors are in correct dtype
inputs["input_ids"] = inputs["input_ids"].long() # Convert to long type
inputs["attention_mask"] = inputs["attention_mask"].long() # Convert to long type
# Make prediction
with torch.no_grad():
outputs = quantized_model(**inputs)
# Get predicted class
predicted_class = torch.argmax(outputs.logits, dim=1).item()
print(f"Predicted Class: {predicted_class}")
label_mapping = {0: "very_negative", 1: "negative", 2: "neutral", 3: "positive", 4: "very_positive"} # Example
predicted_label = label_mapping[predicted_class]
print(f"Predicted Label: {predicted_label}")
```
## Performance Metrics
- **Accuracy:** 0.82
## Fine-Tuning Details
### Dataset
The dataset is the Stanford Sentiment Treebank v2 (SST2), obtained from Kaggle.
### Training
- Number of epochs: 3
- Batch size: 8
- Evaluation strategy: epoch
- Learning rate: 2e-5
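A sketch of the fine-tuning setup implied by these hyperparameters (illustrative; `train_ds` and `eval_ds` are placeholder dataset objects):

```python
from transformers import BertForSequenceClassification, Trainer, TrainingArguments

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)
args = TrainingArguments(
    output_dir="out",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    eval_strategy="epoch",  # evaluate once per epoch
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```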
### Quantization
Post-training quantization was applied by converting the fine-tuned weights to float16 using PyTorch's half-precision support, reducing model size and improving inference efficiency.
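A minimal sketch of that step (paths are illustrative):

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("path/to/finetuned-model")
model.half()  # convert weights to float16
model.save_pretrained("path/to/quantized-model")
```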
## Repository Structure
```
.
├── model/ # Contains the quantized model files
├── tokenizer_config/ # Tokenizer configuration and vocabulary files
├── model.safetensors    # Fine-tuned model weights
└── README.md            # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
|
jedzqg/deepseek-h-novel-1.2 | jedzqg | 2025-05-02T07:22:18Z | 682 | 2 | null | [
"gguf",
"llama",
"zh",
"dataset:qgyd2021/chinese_porn_novel",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-01T14:11:55Z | ---
datasets:
- qgyd2021/chinese_porn_novel
language:
- zh
base_model:
- unsloth/DeepSeek-R1-Distill-Llama-8B
---
This model was fine-tuned from unsloth/DeepSeek-R1-Distill-Llama-8B on the https://huggingface.co/datasets/qgyd2021/chinese_porn_novel?row=0 dataset. Its responses are of very poor quality, and it is not recommended for use. |
marialvsantiago/2f8e7617-7fb4-4f7d-93a5-a3d3fc242a1b | marialvsantiago | 2025-05-02T07:21:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:TitanML/tiny-mixtral",
"base_model:adapter:TitanML/tiny-mixtral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-02T07:20:00Z | ---
library_name: peft
base_model: TitanML/tiny-mixtral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2f8e7617-7fb4-4f7d-93a5-a3d3fc242a1b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TitanML/tiny-mixtral
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3c323697a642eadb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3c323697a642eadb_train_data.json
type:
field_instruction: text
field_output: ru_text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/2f8e7617-7fb4-4f7d-93a5-a3d3fc242a1b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/3c323697a642eadb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9a1f2583-9c90-446d-889a-dc1c408585cb
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 9a1f2583-9c90-446d-889a-dc1c408585cb
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2f8e7617-7fb4-4f7d-93a5-a3d3fc242a1b
This model is a fine-tuned version of [TitanML/tiny-mixtral](https://huggingface.co/TitanML/tiny-mixtral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.5451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.4774 | 0.0080 | 200 | 10.5451 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/amoral-qwen3-14B-GGUF | mradermacher | 2025-05-02T07:21:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"analytical-tasks",
"bias-neutralization",
"uncensored",
"en",
"dataset:soob3123/amoral_reasoning",
"dataset:TheDrummer/AmoralQA-v2",
"base_model:soob3123/amoral-qwen3-14B",
"base_model:quantized:soob3123/amoral-qwen3-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-01T19:40:17Z | ---
base_model: soob3123/amoral-qwen3-14B
datasets:
- soob3123/amoral_reasoning
- TheDrummer/AmoralQA-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- analytical-tasks
- bias-neutralization
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/soob3123/amoral-qwen3-14B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/amoral-qwen3-14B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
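A single-file quant from the table below can also be fetched programmatically (a sketch using `huggingface_hub`; pick any filename from the table):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/amoral-qwen3-14B-GGUF",
    filename="amoral-qwen3-14B.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```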
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/amoral-qwen3-14B-GGUF/resolve/main/amoral-qwen3-14B.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RLHF-And-Friends/TLDR-Llama-3.2-3B-RM | RLHF-And-Friends | 2025-05-02T07:18:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"dataset:tldr-preference",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-02T07:14:24Z | ---
base_model: meta-llama/Llama-3.2-3B
datasets: tldr-preference
library_name: transformers
model_name: RM-TLDR-Llama-3.2-3B
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for RM-TLDR-Llama-3.2-3B
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the [tldr-preference](https://huggingface.co/datasets/tldr-preference) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RLHF-And-Friends/RM-TLDR-Llama-3.2-3B", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/RADFAN/RM-TLDR/runs/zwfo192n)
This model was trained with reward modeling (TRL's `RewardTrainer`).
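For reference, a minimal sketch of reward-model training with TRL (illustrative only — the preference dataset below is a placeholder, not the actual training data):

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

base = "meta-llama/Llama-3.2-3B"
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference dataset with "chosen"/"rejected" columns
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = RewardTrainer(
    model=model,
    args=RewardConfig(output_dir="RM-TLDR-Llama-3.2-3B"),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```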
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Hira13519/Hira | Hira13519 | 2025-05-02T07:15:59Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-02T07:15:59Z | ---
license: apache-2.0
---
|
mattbonnell/wav2vec2-base-wonders-phonemes | mattbonnell | 2025-05-02T07:14:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-01T19:32:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vamcrizer/gemma-3-4b-finetuned-f16_2 | vamcrizer | 2025-05-02T07:11:03Z | 0 | 0 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T06:37:25Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** vamcrizer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
siddhant71197/male_muscular_med_v1 | siddhant71197 | 2025-05-02T07:08:10Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T05:44:27Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sid
---
# Male_Muscular_Med_V1
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sid` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Sid",
"lora_weights": "https://huggingface.co/siddhant71197/male_muscular_med_v1/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('siddhant71197/male_muscular_med_v1', weight_name='lora.safetensors')
image = pipeline('Sid').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/siddhant71197/male_muscular_med_v1/discussions) to add images that show off what you’ve made with this LoRA.
|
nodenoc/deinode | nodenoc | 2025-05-02T07:06:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-01T20:12:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: deinode
---
# Deinode
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `deinode` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "deinode",
"lora_weights": "https://huggingface.co/nodenoc/deinode/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nodenoc/deinode', weight_name='lora.safetensors')
image = pipeline('deinode').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/nodenoc/deinode/discussions) to add images that show off what you’ve made with this LoRA.
|
quacufaizza/zxcvxcv | quacufaizza | 2025-05-02T07:05:46Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-02T07:05:46Z | ---
license: bigscience-openrail-m
---
|
tonybhaskar/phi_3.5_question_rephraser_v1_merged | tonybhaskar | 2025-05-02T07:02:31Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T06:41:09Z | ---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tonybhaskar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kimhahyun/gemma-1.1b-book2-lora | kimhahyun | 2025-05-02T07:00:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T07:00:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BABYSHARK09/Ng | BABYSHARK09 | 2025-05-02T06:54:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T06:49:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
XformAI-india/qwen-0.6b-coder | XformAI-india | 2025-05-02T06:54:01Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"dataset:HuggingFaceH4/CodeAlpaca_20K",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"license:mit",
"region:us"
] | null | 2025-05-01T21:24:25Z | ---
license: mit
datasets:
- HuggingFaceH4/CodeAlpaca_20K
base_model:
- Qwen/Qwen3-0.6B
---
# 🧠 Qwen-0.6B – Code Generation Model
**Model Repo:** `XformAI-india/qwen-0.6b-coder`
**Base Model:** [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B)
**Task:** Code generation and completion
**Trained by:** [XformAI](https://xformai.in)
**Date:** May 2025
---
## 🔍 What is this?
This is a fine-tuned version of Qwen-0.6B optimized for **code generation, completion, and programming logic reasoning**.
It’s designed to be lightweight, fast, and capable of handling common developer tasks across multiple programming languages.
---
## 💻 Use Cases
- AI-powered code assistants
- Auto-completion for IDEs
- Offline code generation
- Learning & training environments
- Natural language → code prompts
---
## 📚 Training Details
| Parameter | Value |
|---------------|--------------|
| Epochs | 3 |
| Batch Size | 16 |
| Optimizer | AdamW |
| Precision | bfloat16 |
| Context Window | 2048 tokens |
| Framework | 🤗 Transformers + LoRA (PEFT) |
---
## 🚀 Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-0.6b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-0.6b-coder")
prompt = "Write a Python function that checks if a number is prime:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
ttn1410/FnReasoning4 | ttn1410 | 2025-05-02T06:53:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T20:34:05Z | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ttn1410
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
garos/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-squinting_strong_duck | garos | 2025-05-02T06:48:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am squinting strong duck",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T00:06:51Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-squinting_strong_duck
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am squinting strong duck
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-squinting_strong_duck
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="garos/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-squinting_strong_duck", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
JohnGenetica/gemma-hybrid-2b-jemnai-text2cypher-666-123 | JohnGenetica | 2025-05-02T06:40:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T06:23:59Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: google/gemma-2-2b-it
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
xuan-luo/RTWQwen-2.5-1.5B-Instruct | xuan-luo | 2025-05-02T06:39:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"rtwqwen2",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2025-05-01T08:32:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sorawiz/Qwen2.5-14B-Instinct-RP | Sorawiz | 2025-05-02T06:35:36Z | 46 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4",
"base_model:merge:Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4",
"base_model:Sao10K/14B-Qwen2.5-Freya-x1",
"base_model:merge:Sao10K/14B-Qwen2.5-Freya-x1",
"base_model:Sao10K/14B-Qwen2.5-Kunou-v1",
"base_model:merge:Sao10K/14B-Qwen2.5-Kunou-v1",
"base_model:SicariusSicariiStuff/Impish_QWEN_14B-1M",
"base_model:merge:SicariusSicariiStuff/Impish_QWEN_14B-1M",
"base_model:Sorawiz/Qwen2.5-14B-GCC",
"base_model:merge:Sorawiz/Qwen2.5-14B-GCC",
"base_model:Ttimofeyka/Tissint-14B-v1.2-128k-RP",
"base_model:merge:Ttimofeyka/Tissint-14B-v1.2-128k-RP",
"base_model:deepcogito/cogito-v1-preview-qwen-14B",
"base_model:merge:deepcogito/cogito-v1-preview-qwen-14B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-15T17:20:25Z | ---
base_model:
- Ttimofeyka/Tissint-14B-v1.2-128k-RP
- SicariusSicariiStuff/Impish_QWEN_14B-1M
- Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
- deepcogito/cogito-v1-preview-qwen-14B
- Sao10K/14B-Qwen2.5-Freya-x1
- Sao10K/14B-Qwen2.5-Kunou-v1
- Sorawiz/Qwen2.5-14B-GCC
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using Sorawiz/Qwen2.5-14B-1M-Instinct as a base.
### Models Merged
The following models were included in the merge:
* [Ttimofeyka/Tissint-14B-v1.2-128k-RP](https://huggingface.co/Ttimofeyka/Tissint-14B-v1.2-128k-RP)
* [SicariusSicariiStuff/Impish_QWEN_14B-1M](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M)
* [Sorawiz/Qwen2.5-14B-GCC](https://huggingface.co/Sorawiz/Qwen2.5-14B-GCC)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
name: Sorawiz/Qwen2.5-14B-Instinct-Base
merge_method: dare_ties
base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
models:
- model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
parameters:
weight: 0.3
- model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
parameters:
weight: 0.7
parameters:
density: 1
tokenizer:
source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Instincto
merge_method: dare_ties
base_model: deepcogito/cogito-v1-preview-qwen-14B
models:
- model: deepcogito/cogito-v1-preview-qwen-14B
parameters:
weight: 0.4
- model: Sorawiz/Qwen2.5-14B-Instinct-Base
parameters:
weight: 0.3
- model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
parameters:
weight: 0.3
parameters:
density: 0.5
tokenizer:
source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Kunousint
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
models:
- model: Sao10K/14B-Qwen2.5-Kunou-v1
parameters:
weight: 0.5
- model: Sorawiz/Qwen2.5-14B-Instincto
parameters:
weight: 0.3
- model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
parameters:
weight: 0.2
parameters:
density: 0.5
tokenizer:
source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Kunousint-1M
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-Imstinct
models:
- model: Sorawiz/Qwen2.5-14B-Imstinct
parameters:
weight: 0.2
- model: Sorawiz/Qwen2.5-14B-Kunousint
parameters:
weight: 0.5
- model: Sao10K/14B-Qwen2.5-Kunou-v1
parameters:
weight: 0.3
parameters:
density: 0.5
tokenizer:
source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Frayasint
merge_method: dare_ties
base_model: Sao10K/14B-Qwen2.5-Freya-x1
models:
- model: Sao10K/14B-Qwen2.5-Freya-x1
parameters:
weight: 0.5
- model: Sorawiz/Qwen2.5-14B-Instincto
parameters:
weight: 0.3
- model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
parameters:
weight: 0.2
parameters:
density: 0.5
tokenizer:
source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-Frayasint-1M
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-Imstinct
models:
- model: Sorawiz/Qwen2.5-14B-Imstinct
parameters:
weight: 0.2
- model: Sorawiz/Qwen2.5-14B-Frayasint
parameters:
weight: 0.5
- model: Sao10K/14B-Qwen2.5-Freya-x1
parameters:
weight: 0.3
parameters:
density: 0.5
tokenizer:
source: union
chat_template: auto
---
name: Sorawiz/Qwen2.5-14B-1M-Instinct
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-Imstinct
models:
- model: Sorawiz/Qwen2.5-14B-Imstinct
parameters:
weight: 0.25
  - model: Sorawiz/Qwen2.5-14B-Kunousint-1M
parameters:
weight: 0.25
- model: Sorawiz/Qwen2.5-14B-Frayasint-1M
parameters:
weight: 0.25
- model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
parameters:
weight: 0.25
parameters:
density: 1
tokenizer:
source: union
chat_template: auto
---
merge_method: dare_ties
base_model: Sorawiz/Qwen2.5-14B-1M-Instinct
models:
- model: Sorawiz/Qwen2.5-14B-1M-Instinct
parameters:
weight: 0.40
- model: Ttimofeyka/Tissint-14B-v1.2-128k-RP
parameters:
weight: 0.25
- model: SicariusSicariiStuff/Impish_QWEN_14B-1M
parameters:
weight: 0.25
- model: Sorawiz/Qwen2.5-14B-GCC
parameters:
weight: 0.10
parameters:
density: 0.5
tokenizer:
source: union
chat_template: auto
```
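To actually run a recipe like the final stage above, mergekit's Python entry points can be used instead of the `mergekit-yaml` CLI. The sketch below is a minimal example, assuming the config has been saved locally as `config.yaml` and that `MergeConfiguration`, `run_merge`, and `MergeOptions` are available in the installed mergekit version:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe (e.g. the final dare_ties stage above) from disk.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged checkpoint to ./merged-model.
run_merge(
    merge_config,
    "./merged-model",
    options=MergeOptions(
        cuda=True,            # merge on GPU if one is available
        copy_tokenizer=True,  # carry the union tokenizer over to the output
    ),
)
```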
|
oddegen/wav2vec2-large-mms-1b-amharic-colab | oddegen | 2025-05-02T06:31:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-02T02:50:09Z | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-amharic-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: am
split: test
args: am
metrics:
- name: Wer
type: wer
value: 0.504746835443038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-amharic-colab
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6247
- Wer: 0.5047
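A minimal way to try the checkpoint is the `transformers` ASR pipeline. The sketch below is illustrative, assuming a local 16 kHz mono recording at the placeholder path `sample.wav`:

```python
from transformers import pipeline

# Load the fine-tuned MMS checkpoint from this repository.
asr = pipeline(
    "automatic-speech-recognition",
    model="oddegen/wav2vec2-large-mms-1b-amharic-colab",
)

# Transcribe a local 16 kHz mono recording (path is a placeholder).
print(asr("sample.wav")["text"])
```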
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 15.6099 | 1.1364 | 50 | 3.3812 | 0.9995 |
| 1.174 | 2.2727 | 100 | 0.6846 | 0.5174 |
| 0.6566 | 3.4091 | 150 | 0.6247 | 0.5047 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
SmallDoge/Qwen2.5-math-7b-llmlingua-50 | SmallDoge | 2025-05-02T06:28:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-30T18:00:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sandeep-faberwork/llama3_pmbok_finetuned | sandeep-faberwork | 2025-05-02T06:24:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T06:24:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dakexiaoying/DPO_finetuned_model | dakexiaoying | 2025-05-02T06:23:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T04:39:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nice2mitya/a_873703384 | nice2mitya | 2025-05-02T06:18:03Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-02T05:50:15Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
SampsonSampson/SampsonSampson | SampsonSampson | 2025-05-02T06:16:38Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-02T06:16:38Z | ---
license: bigscience-bloom-rail-1.0
---
|
jobz-hunting-18-new-videos/wATCH.Jobz.Hunting.Sajal.Malik.viral.video.Leaks.original | jobz-hunting-18-new-videos | 2025-05-02T06:15:42Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-02T06:11:44Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/5n7shfr3?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
Actor jobz hunting sajal malik Original V𝚒deo V𝚒deo took the internet by storm and amazed viewers on various social media platforms. Actor jobz hunting sajal malik, a young and talented digital creator, recently became famous thanks to this interesting V𝚒deo.
L𝚎aked V𝚒deo Actor jobz hunting sajal malik V𝚒ral V𝚒deo Original V𝚒deo L𝚒nk On Social Media Telegram X Trending Tiktok |
yoimisan/ppo-Huggy | yoimisan | 2025-05-02T06:09:53Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2025-05-02T06:09:36Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: yoimisan/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
DaydreamerMZM/qwen2.5_1.5B_baseline | DaydreamerMZM | 2025-05-02T06:03:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-02T06:00:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
roeeai27/roeeai | roeeai27 | 2025-05-02T06:00:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-02T05:33:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: roee
---
# Roeeai
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `roee` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "roee",
"lora_weights": "https://huggingface.co/roeeai27/roeeai/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('roeeai27/roeeai', weight_name='lora.safetensors')
image = pipeline('roee').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/roeeai27/roeeai/discussions) to add images that show off what you’ve made with this LoRA.
|
kaitchup/Qwen3-14B-autoround-2bit-gptq | kaitchup | 2025-05-02T05:59:31Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"autoround",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"2-bit",
"gptq",
"region:us"
] | null | 2025-05-01T13:02:27Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-14B
tags:
- autoround
---
This is [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) quantized with [AutoRound](https://github.com/intel/auto-round/tree/main/auto_round) in 2-bit (symmetric + gptq format). The model has been created, tested, and evaluated by The Kaitchup.
The model is compatible with vLLM and Transformers.
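As a quick sanity check, the checkpoint loads like any other GPTQ-format model with `transformers`. This is a minimal sketch, assuming a CUDA GPU and a recent `transformers` install with its GPTQ backend available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Qwen3-14B-autoround-2bit-gptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config is read from the repo; weights stay 2-bit.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Briefly explain weight quantization.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```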
More details in this article:
[How Well Does Qwen3 Handle 4-bit and 2-bit Quantization?](https://kaitchup.substack.com/p/how-well-does-qwen3-handle-4-bit)


- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0 license
## How to Support My Work
Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). This helps me a lot to continue quantizing and evaluating models for free. |
kaitchup/Qwen3-14B-autoround-4bit-gptq | kaitchup | 2025-05-02T05:59:12Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"autoround",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
] | null | 2025-05-01T13:02:45Z | ---
license: apache-2.0
base_model:
- Qwen/Qwen3-14B
tags:
- autoround
---
This is [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) quantized with [AutoRound](https://github.com/intel/auto-round/tree/main/auto_round) in 4-bit (symmetric + gptq format). The model has been created, tested, and evaluated by The Kaitchup.
The model is compatible with vLLM and Transformers.
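For serving, a minimal vLLM sketch (assuming a vLLM build with GPTQ support; vLLM picks up the quantization from the checkpoint config):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="kaitchup/Qwen3-14B-autoround-4bit-gptq")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Briefly explain weight quantization."], params)
print(outputs[0].outputs[0].text)
```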
More details in this article:
[How Well Does Qwen3 Handle 4-bit and 2-bit Quantization?](https://kaitchup.substack.com/p/how-well-does-qwen3-handle-4-bit)


- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0 license
## How to Support My Work
Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). This helps me a lot to continue quantizing and evaluating models for free. |
mradermacher/BiMediX2-8B-hf-i1-GGUF | mradermacher | 2025-05-02T05:52:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"biology",
"healthcare",
"medical",
"LMM",
"en",
"base_model:MBZUAI/BiMediX2-8B-hf",
"base_model:quantized:MBZUAI/BiMediX2-8B-hf",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-02T04:46:50Z | ---
base_model: MBZUAI/BiMediX2-8B-hf
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- biology
- healthcare
- medical
- LMM
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MBZUAI/BiMediX2-8B-hf
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
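As a minimal sketch (assumes llama.cpp and `huggingface-cli` are installed; file and prompt are illustrative):
```bash
# Download one quant from this repo (Q4_K_M is the "fast, recommended" pick in the table below)
huggingface-cli download mradermacher/BiMediX2-8B-hf-i1-GGUF BiMediX2-8B-hf.i1-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp
llama-cli -m BiMediX2-8B-hf.i1-Q4_K_M.gguf -p "List three common symptoms of anemia." -n 128
```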
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF/resolve/main/BiMediX2-8B-hf.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/pixtral-12b-GGUF | mradermacher | 2025-05-02T05:52:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mistral-community/pixtral-12b",
"base_model:quantized:mistral-community/pixtral-12b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T03:28:38Z | ---
base_model: mistral-community/pixtral-12b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mistral-community/pixtral-12b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/pixtral-12b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pixtral-12b-GGUF/resolve/main/pixtral-12b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/BiMediX2-8B-hf-GGUF | mradermacher | 2025-05-02T05:49:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"biology",
"healthcare",
"medical",
"LMM",
"en",
"base_model:MBZUAI/BiMediX2-8B-hf",
"base_model:quantized:MBZUAI/BiMediX2-8B-hf",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T04:22:28Z | ---
base_model: MBZUAI/BiMediX2-8B-hf
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
quantized_by: mradermacher
tags:
- biology
- healthcare
- medical
- LMM
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MBZUAI/BiMediX2-8B-hf
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/BiMediX2-8B-hf-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
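If you ever download a multi-part quant (none appear in the table below), the pieces are simply concatenated; file names here are hypothetical:
```bash
# Reassemble a split quant into a single GGUF before loading it
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```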
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/BiMediX2-8B-hf-GGUF/resolve/main/BiMediX2-8B-hf.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Pixel-1111-14B-i1-GGUF | mradermacher | 2025-05-02T05:49:43Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"pixel",
"synthetic-entity",
"rave-companion",
"digital-princess",
"mindbots",
"llama-factory",
"qwen3-14b",
"en",
"base_model:TheMindExpansionNetwork/Pixel-1111-14B",
"base_model:quantized:TheMindExpansionNetwork/Pixel-1111-14B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-01T17:18:26Z | ---
base_model: TheMindExpansionNetwork/Pixel-1111-14B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- pixel
- synthetic-entity
- rave-companion
- digital-princess
- mindbots
- llama-factory
- qwen3-14b
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TheMindExpansionNetwork/Pixel-1111-14B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Pixel-1111-14B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Pixel-1111-14B-i1-GGUF/resolve/main/Pixel-1111-14B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/llama3.2_3B_vl-i1-GGUF | mradermacher | 2025-05-02T05:43:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:thkim0305/llama3.2_3B_vl",
"base_model:quantized:thkim0305/llama3.2_3B_vl",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-02T04:13:52Z | ---
base_model: thkim0305/llama3.2_3B_vl
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/thkim0305/llama3.2_3B_vl
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/llama3.2_3B_vl-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q4_1.gguf) | i1-Q4_1 | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama3.2_3B_vl-i1-GGUF/resolve/main/llama3.2_3B_vl.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
navin-kumar-j/whisper-small-ta-w-pcd | navin-kumar-j | 2025-05-02T05:39:29Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ta",
"base_model:navin-kumar-j/whisper-small-ta",
"base_model:finetune:navin-kumar-j/whisper-small-ta",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-02T05:23:56Z | ---
library_name: transformers
language:
- ta
license: apache-2.0
base_model: navin-kumar-j/whisper-small-ta
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Ta with Phone Control Data - Navin Kumar J
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ta with Phone Control Data - Navin Kumar J
This model is a fine-tuned version of [navin-kumar-j/whisper-small-ta](https://huggingface.co/navin-kumar-j/whisper-small-ta) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0057
- Wer: 0.0096
## Model description
More information needed
## Intended uses & limitations
More information needed
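A minimal usage sketch (the audio file name is a placeholder; assumes `transformers` with its audio dependencies installed):
```python
from transformers import pipeline

# Load this checkpoint for Tamil speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="navin-kumar-j/whisper-small-ta-w-pcd",
)

print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio clip
```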
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0002 | 1.25 | 40 | 0.0055 | 0.0096 |
| 0.0043 | 2.5 | 80 | 0.0047 | 0.0115 |
| 0.0092 | 3.75 | 120 | 0.0053 | 0.0115 |
| 0.0 | 5.0 | 160 | 0.0058 | 0.0096 |
| 0.0001 | 6.25 | 200 | 0.0057 | 0.0096 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
mradermacher/saiga_gemma3_12b-GGUF | mradermacher | 2025-05-02T05:38:11Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"ru",
"dataset:IlyaGusev/saiga_scored",
"dataset:IlyaGusev/saiga_preferences",
"base_model:IlyaGusev/saiga_gemma3_12b",
"base_model:quantized:IlyaGusev/saiga_gemma3_12b",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T03:30:15Z | ---
base_model: IlyaGusev/saiga_gemma3_12b
datasets:
- IlyaGusev/saiga_scored
- IlyaGusev/saiga_preferences
language:
- ru
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/IlyaGusev/saiga_gemma3_12b
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/saiga_gemma3_12b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
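A minimal serving sketch (quant choice and context size are illustrative; assumes a recent llama.cpp build):
```bash
# Stream the Q4_K_M quant straight from this repo and expose an OpenAI-compatible server
llama-server --hf-repo mradermacher/saiga_gemma3_12b-GGUF --hf-file saiga_gemma3_12b.Q4_K_M.gguf -c 4096
```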
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q3_K_M.gguf) | Q3_K_M | 6.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q3_K_L.gguf) | Q3_K_L | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.IQ4_XS.gguf) | IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q4_K_S.gguf) | Q4_K_S | 7.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q4_K_M.gguf) | Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q5_K_S.gguf) | Q5_K_S | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q5_K_M.gguf) | Q5_K_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q6_K.gguf) | Q6_K | 9.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/saiga_gemma3_12b-GGUF/resolve/main/saiga_gemma3_12b.Q8_0.gguf) | Q8_0 | 12.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MrDragonFox/baddy_S2_EXP_2-Q4_K_M-GGUF | MrDragonFox | 2025-05-02T05:37:13Z | 0 | 0 | null | [
"gguf",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"base_model:MrDragonFox/baddy_S2_EXP_2",
"base_model:quantized:MrDragonFox/baddy_S2_EXP_2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T05:37:02Z | ---
base_model: MrDragonFox/baddy_S2_EXP_2
license: cc-by-nc-4.0
tags:
- unsloth
- llama-cpp
- gguf-my-repo
---
# MrDragonFox/baddy_S2_EXP_2-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrDragonFox/baddy_S2_EXP_2`](https://huggingface.co/MrDragonFox/baddy_S2_EXP_2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrDragonFox/baddy_S2_EXP_2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrDragonFox/baddy_S2_EXP_2-Q4_K_M-GGUF --hf-file baddy_s2_exp_2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrDragonFox/baddy_S2_EXP_2-Q4_K_M-GGUF --hf-file baddy_s2_exp_2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrDragonFox/baddy_S2_EXP_2-Q4_K_M-GGUF --hf-file baddy_s2_exp_2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrDragonFox/baddy_S2_EXP_2-Q4_K_M-GGUF --hf-file baddy_s2_exp_2-q4_k_m.gguf -c 2048
```
|
MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF | MrDragonFox | 2025-05-02T05:32:56Z | 0 | 0 | null | [
"gguf",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"base_model:MrDragonFox/baddy_S2_EXP_2",
"base_model:quantized:MrDragonFox/baddy_S2_EXP_2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T05:30:01Z | ---
base_model: MrDragonFox/baddy_S2_EXP_2
license: cc-by-nc-4.0
tags:
- unsloth
- llama-cpp
- gguf-my-repo
---
# MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF
This model was converted to GGUF format from [`MrDragonFox/baddy_S2_EXP_2`](https://huggingface.co/MrDragonFox/baddy_S2_EXP_2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrDragonFox/baddy_S2_EXP_2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrDragonFox/baddy_S2_EXP_2-Q8_0-GGUF --hf-file baddy_s2_exp_2-q8_0.gguf -c 2048
```
|
souissihiba/Qwen-3-32B-Medical-Reasoning | souissihiba | 2025-05-02T05:32:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:32:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alicia10/Llama-3.2-1B-unsloth-bnb-4bit-ko-wiki-filtering_v2 | alicia10 | 2025-05-02T05:31:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:29:24Z | ---
base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** alicia10
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
osllmai-community/Llama-3.2-1B-Instruct-GGUF | osllmai-community | 2025-05-02T05:31:10Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"osllmai",
"en",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-30T05:56:53Z | ---
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- osllmai
- transformers
---
**osllm.ai Models Highlights Program**
**We believe there's no need to pay per token if you have a GPU on your computer.**
Highlighting new and noteworthy models from the community. Join the conversation on Discord.
<p align="center">
<a href="https://osllm.ai">Official Website</a> • <a href="https://docs.osllm.ai/index.html">Documentation</a> • <a href="https://discord.gg/2fftQauwDD">Discord</a>
</p>
<p align="center">
<b>NEW:</b> <a href="https://docs.google.com/forms/d/1CQXJvxLUqLBSXnjqQmRpOyZqD6nrKubLz2WTcIJ37fU/prefill">Subscribe to our mailing list</a> for updates and news!
</p>
Email: [email protected]
**Disclaimers**
[Osllm.ai](https://osllm.ai/) is not the creator, originator, or owner of any model featured in the Community Model Program. Each Community Model is created and provided by third parties. [Osllm.ai](https://osllm.ai/) does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate, inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated it. [Osllm.ai](https://osllm.ai/) may not monitor or control the Community Models and cannot take responsibility for them. [Osllm.ai](https://osllm.ai/) disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Community Models. Furthermore, [Osllm.ai](https://osllm.ai/) disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted, error-free, virus-free, or that any issues will be corrected. You are solely responsible for any damage resulting from your use of or access to the Community Models, downloading of any Community Model, or use of any other Community Model provided by or through [Osllm.ai](https://osllm.ai/). |
kate1130/koelectra-GPT-bullying-classifier | kate1130 | 2025-05-02T05:27:10Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T05:24:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DuongTrongChi/temp-v2 | DuongTrongChi | 2025-05-02T05:23:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-0.6B",
"base_model:finetune:unsloth/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T05:23:06Z | ---
base_model: unsloth/Qwen3-0.6B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DuongTrongChi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-0.6B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
shubhamprshr/Llama-3.2-3B-Instruct_blocksworld1246_sgrpo_classic_0.5_0.5_True_300 | shubhamprshr | 2025-05-02T05:18:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"dataset:blocksworld-dataset",
"arxiv:2402.03300",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T21:23:55Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets: blocksworld-dataset
library_name: transformers
model_name: Llama-3.2-3B-Instruct_blocksworld1246_sgrpo_classic_0.5_0.5_True_300
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Llama-3.2-3B-Instruct_blocksworld1246_sgrpo_classic_0.5_0.5_True_300
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shubhamprshr/Llama-3.2-3B-Instruct_blocksworld1246_sgrpo_classic_0.5_0.5_True_300", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/BW2/runs/3ojeo26c)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |