modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
glif-loradex-trainer/maxxd4240_Whimsical_Adventures | glif-loradex-trainer | 2024-10-11T10:47:39Z | 33 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-21T11:34:24Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_Whimsical_Adventures/resolve/main/samples/1726918329173__000003000_0.jpg
text: 'Gender-neutral scavengers in rugged, asymmetrical armor with skyfall shards
embedded in their skin. Their eyes glow faintly, reflecting the shattered sky
above, and their hair is wind-torn and gray WhimsicalAdventures '
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_Whimsical_Adventures/resolve/main/samples/1726918353811__000003000_1.jpg
text: 'Male or female spellcasters cloaked in dark, tattered robes. Their bodies
are marked by glowing glyphs, and they carry ancient books chained to their belts.
Their eyes appear hollow, with wisps of magic flickering within WhimsicalAdventures '
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_Whimsical_Adventures/resolve/main/samples/1726918378445__000003000_2.jpg
text: 'Androgynous survivors with pale, almost translucent skin and dark, form-fitting
suits that reflect the emptiness of space. They have void-inflicted scars across
their bodies, and their faces are expressionless, with faint glowing veins under
their skin WhimsicalAdventures '
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_Whimsical_Adventures/resolve/main/samples/1726918403041__000003000_3.jpg
text: 'Male-dominated resistance fighters wearing bulky mechanical exoskeletons
with mystical etchings. Their faces are obscured by grimy visors, revealing only
tired, hardened eyes. Each piece of armor is a blend of tech and ancient runes,
radiating an aura of rebellion WhimsicalAdventures '
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_Whimsical_Adventures/resolve/main/samples/1726918427639__000003000_4.jpg
text: 'Female-dominant warriors with charred leather armor made from dragon scales,
their eyes lit with the embers of dark magic. Their hair is either braided or
burned at the tips, and their skin bears deep scars from dragonfire WhimsicalAdventures '
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_Whimsical_Adventures/resolve/main/samples/1726918452273__000003000_5.jpg
text: 'Fae of any gender with metallic wings, sleek armor fused with organic materials,
and eerily beautiful faces that hide cruelty. Their skin is a metallic sheen,
with sharp eyes in unnatural colors, and they wield weapons that seem grown from
the very earth or metal around them WhimsicalAdventures '
trigger: WhimsicalAdventures
instance_prompt: WhimsicalAdventures
---
# Whimsical_Adventures
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `maxxd4240`.
<Gallery />
## Trigger words
You should use `WhimsicalAdventures` to trigger the image generation.
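The card does not include a loading snippet; below is a minimal sketch with 🤗 Diffusers, assuming the standard FLUX.1-dev LoRA workflow (the example prompt is illustrative, not from the training set).
```python
# A sketch, not the trainer's official snippet: load FLUX.1-dev, attach this
# LoRA, and append the trigger word to the prompt. Pass weight_name=... to
# load_lora_weights if the repo holds more than one .safetensors file.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/maxxd4240_Whimsical_Adventures")

# Hypothetical prompt; the important part is the trailing trigger word.
image = pipe(
    "a fox explorer crossing a rope bridge WhimsicalAdventures",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("whimsical.png")
```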
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/maxxd4240_Whimsical_Adventures/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/tostiok_flux_dev_bad_taxidermist | glif-loradex-trainer | 2024-10-11T10:47:37Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-21T04:45:17Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/tostiok_flux_dev_bad_taxidermist/resolve/main/samples/1726893783946__000003000_0.jpg
text: A white dog with a grotesque, taxidermized appearance, standing against a
dark gray background. The dog’s fur is matted and tangled, with large, glassy
eyes and an unsettling, stiff posture. Its mouth is slightly open, revealing sharp,
unnatural-looking teeth. The poorly preserved body gives off an eerie, surreal
atmosphere, with a sense of something being wrong. The dark gray backdrop intensifies
the unsettling feeling, creating a haunting and strange effect. bad taxidermist
style
- output:
url: https://huggingface.co/glif-loradex-trainer/tostiok_flux_dev_bad_taxidermist/resolve/main/samples/1726893808585__000003000_1.jpg
text: The image shows a disturbing creature that resembles a cat, but with warped
and unsettling features that give it a nightmarish appearance. The fur is thin,
patchy, and disheveled, primarily grayish-white in color with a grimy, neglected
look. Its face, like that of a cat, has large, alien-like eyes that are completely
black, shiny, and eerily reflective, much like the previous image. The disproportionate
size of the eyes makes the creature appear both unnatural and unnerving. Its mouth
is grotesquely distorted, displaying a row of sharp, jagged yellow teeth that
seem too large for its small face. The gums are visible, adding to the sense of
decay, as they appear discolored and unhealthy. The lips are slightly pulled back,
revealing the menacing teeth and contributing to a grimace-like expression. Its
nose, though cat-like in structure, is small and shriveled, positioned above the
unsettling teeth. The ears are large and pointed, with scraggly, unkempt fur clinging
to them, while the insides of the ears are a reddish hue, giving an unhealthy,
almost raw appearance. The creature’s whiskers are long, thin, and unkempt, protruding
unevenly from the sides of its face, adding to the overall sense of disorder and
malformation. The background is a plain, dark gray, making the creature stand
out sharply, with soft lighting casting faint shadows to emphasize its unsettling
features. The focus is entirely on the creature, with no distractions in the background,
enhancing the sense of horror and tension. This image appears to be another highly
detailed digital rendering, likely created using advanced 3D modeling software,
with realistic texture mapping and lighting techniques. The dark, exaggerated,
and surreal elements suggest it was designed to invoke fear, perhaps as a creature
from a horror or fantasy setting, with careful attention to creating an eerie
and unsettling atmosphere. bad taxidermist style
- output:
url: https://huggingface.co/glif-loradex-trainer/tostiok_flux_dev_bad_taxidermist/resolve/main/samples/1726893833222__000003000_2.jpg
text: The image depicts a highly detailed, grotesque humanoid figure that closely
resembles an emaciated, nightmarish elderly woman. Her skeletal frame is marked
by deeply wrinkled, leathery skin that clings tightly to her bones, giving her
a gaunt, almost mummified appearance. The skin is a sickly brownish-gray tone,
accentuating the deep lines and creases that cover her entire body, from her face
to her neck and shoulders. Her bones, particularly her collarbones and shoulders,
jut out prominently, further emphasizing her extreme thinness and malnourished
state. Her face is a terrifying focal point. Deep wrinkles and creases carve into
her skull-like face, creating an intense texture of decay. Her mouth is contorted
into a disturbing, forced grin, with cracked lips revealing yellowed, crooked
teeth. The smile is too wide, almost unnaturally so, as if it’s been etched into
her face. Her nose is sharp and bony, barely more than a protrusion above her
sunken cheeks. The most striking and unsettling feature is her eyes. They are
wide, shockingly large, and intensely blue, with small, dilated pupils that give
her a crazed, manic expression. The eyes seem too big for her face, and their
bright blue color contrasts sharply against the dark, decayed tones of her skin,
making them stand out in an unnervingly vibrant manner. The eyes are surrounded
by deep hollows, enhancing the overall look of age, exhaustion, and madness. Her
hair is wild and white, standing out in frizzy, unkempt tufts that radiate from
her head in all directions. The texture is dry and brittle, as though it hasn’t
been cared for in a long time. The hair adds to the chaotic, witch-like appearance,
framing her skeletal face and further emphasizing the sense of disorder and decay.
The lighting in the image highlights the contours and textures of her skin and
bones, casting shadows in the deep creases of her face and body, which enhances
her skeletal form. The background is a simple dark gray, allowing the figure to
be the sole focus of the viewer’s attention, with the lighting providing enough
contrast to make every disturbing detail stand out clearly. This image appears
to be a digital 3D rendering with hyper-realistic detail, possibly designed for
a horror or dark fantasy setting. The figure’s exaggerated and decayed features
evoke a strong sense of fear and discomfort, playing on common tropes of witches
or ghoulish elderly figures from folklore, but taken to an extreme of grotesque
realism. The combination of skeletal thinness, wide, manic eyes, and the decayed
state of her body suggests that this figure was intentionally crafted to evoke
horror and unease. bad taxidermist style
- output:
url: https://huggingface.co/glif-loradex-trainer/tostiok_flux_dev_bad_taxidermist/resolve/main/samples/1726893857853__000003000_3.jpg
text: a dog, high quality photo 8k. bad taxidermist style
- output:
url: https://huggingface.co/glif-loradex-trainer/tostiok_flux_dev_bad_taxidermist/resolve/main/samples/1726893882484__000003000_4.jpg
text: a cat, high quality photo 8k. bad taxidermist style
- output:
url: https://huggingface.co/glif-loradex-trainer/tostiok_flux_dev_bad_taxidermist/resolve/main/samples/1726893907117__000003000_5.jpg
text: a middle age man, high quality photo 8k. bad taxidermist style
trigger: bad taxidermist style
instance_prompt: bad taxidermist style
---
# flux_dev_bad_taxidermist
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `tostiok`.
<Gallery />
## Trigger words
You should use `bad taxidermist style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/tostiok_flux_dev_bad_taxidermist/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/quitters_flux_dev_courtroomsketch | glif-loradex-trainer | 2024-10-11T10:47:35Z | 58 | 5 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-20T19:04:39Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/quitters_flux_dev_courtroomsketch/resolve/main/samples/1726859019631__000003000_0.jpg
text: courtroom sketch, in the case of Ronald McDonald vs The United States courtroomsketch
- output:
url: https://huggingface.co/glif-loradex-trainer/quitters_flux_dev_courtroomsketch/resolve/main/samples/1726859044259__000003000_1.jpg
text: the incident courtroomsketch, Porous point pen, crayon, pastel, opaque white,
and ink wash on illustration board.
- output:
url: https://huggingface.co/glif-loradex-trainer/quitters_flux_dev_courtroomsketch/resolve/main/samples/1726859068877__000003000_2.jpg
text: quick, the bailiff is distracted, make protest art in the courtroom! courtroomsketch
trigger: courtroomsketch
instance_prompt: courtroomsketch
---
# flux_dev_courtroomsketch
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `quitters`.
<Gallery />
## Trigger words
You should use `courtroomsketch` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/quitters_flux_dev_courtroomsketch/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/highdruidmotas_bennett_shadows | glif-loradex-trainer | 2024-10-11T10:47:33Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-20T18:02:11Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/highdruidmotas_bennett_shadows/resolve/main/samples/1726855254243__000001500_0.jpg
text: wounded centaur, mythical creature bennett_shadows
- output:
url: https://huggingface.co/glif-loradex-trainer/highdruidmotas_bennett_shadows/resolve/main/samples/1726855277802__000001500_1.jpg
text: ruins of athens, snake bennett_shadows
- output:
url: https://huggingface.co/glif-loradex-trainer/highdruidmotas_bennett_shadows/resolve/main/samples/1726855301820__000001500_2.jpg
text: silver vampire sword bennett_shadows
trigger: bennett_shadows
instance_prompt: bennett_shadows
---
# bennett_shadows
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `highdruidmotas`.
<Gallery />
## Trigger words
You should use `bennett_shadows` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/highdruidmotas_bennett_shadows/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/fabian3000_fontsheet3 | glif-loradex-trainer | 2024-10-11T10:47:29Z | 14 | 3 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-20T17:20:24Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_fontsheet3/resolve/main/samples/1726852756046__000002500_0.jpg
text: wounded centaur, mythical creature fontsheet
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_fontsheet3/resolve/main/samples/1726852779869__000002500_1.jpg
text: ruins of athens, snake fontsheet
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_fontsheet3/resolve/main/samples/1726852803532__000002500_2.jpg
text: silver vampire sword fontsheet
trigger: fontsheet
instance_prompt: fontsheet
---
# fontsheet3
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `fabian3000`.
<Gallery />
## Trigger words
You should use `fontsheet` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/fabian3000_fontsheet3/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/goldenark__Test2Semi-realistic-anime-portrait | glif-loradex-trainer | 2024-10-11T10:47:20Z | 22 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-20T05:58:11Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/goldenark__Test2Semi-realistic-anime-portrait/resolve/main/samples/1726811829456__000003000_0.jpg
text: man, sitting on a noble chair A semi-realistic anime portrait
- output:
url: https://huggingface.co/glif-loradex-trainer/goldenark__Test2Semi-realistic-anime-portrait/resolve/main/samples/1726811854138__000003000_1.jpg
text: girl, sitting on a noble chair A semi-realistic anime portrait
- output:
url: https://huggingface.co/glif-loradex-trainer/goldenark__Test2Semi-realistic-anime-portrait/resolve/main/samples/1726811879059__000003000_2.jpg
text: woman, sitting on a noble chair A semi-realistic anime portrait
trigger: A semi-realistic anime portrait
instance_prompt: A semi-realistic anime portrait
---
# Test2Semi-realistic-anime-portrait
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `goldenark_`.
<Gallery />
## Trigger words
You should use `A semi-realistic anime portrait` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/goldenark__Test2Semi-realistic-anime-portrait/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/AP123_flux_dev_1990s_rap_albums | glif-loradex-trainer | 2024-10-11T10:47:18Z | 45 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-20T04:23:31Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/AP123_flux_dev_1990s_rap_albums/resolve/main/samples/1726806146455__000001500_0.jpg
text: wounded centaur, mythical creature r4p-styl3
- output:
url: https://huggingface.co/glif-loradex-trainer/AP123_flux_dev_1990s_rap_albums/resolve/main/samples/1726806170998__000001500_1.jpg
text: ruins of athens, snake r4p-styl3
- output:
url: https://huggingface.co/glif-loradex-trainer/AP123_flux_dev_1990s_rap_albums/resolve/main/samples/1726806195540__000001500_2.jpg
text: silver vampire sword r4p-styl3
trigger: r4p-styl3
instance_prompt: r4p-styl3
---
# flux_dev_1990s_rap_albums
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `AP123`.
<Gallery />
## Trigger words
You should use `r4p-styl3` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/AP123_flux_dev_1990s_rap_albums/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/goldenark__Test1_Semi-realistic_anime_portrait | glif-loradex-trainer | 2024-10-11T10:47:15Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-20T02:45:49Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/goldenark__Test1_Semi-realistic_anime_portrait/resolve/main/samples/1726800285663__000001500_0.jpg
text: man, sitting A Semi-realistic anime portrait
- output:
url: https://huggingface.co/glif-loradex-trainer/goldenark__Test1_Semi-realistic_anime_portrait/resolve/main/samples/1726800310263__000001500_1.jpg
text: girl, sitting A Semi-realistic anime portrait
- output:
url: https://huggingface.co/glif-loradex-trainer/goldenark__Test1_Semi-realistic_anime_portrait/resolve/main/samples/1726800334843__000001500_2.jpg
text: woman, sitting A Semi-realistic anime portrait
trigger: A Semi-realistic anime portrait
instance_prompt: A Semi-realistic anime portrait
---
# Test1_Semi-realistic_anime_portrait
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `goldenark_`.
<Gallery />
## Trigger words
You should use `A Semi-realistic anime portrait` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/goldenark__Test1_Semi-realistic_anime_portrait/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/heather_huet_textiles | glif-loradex-trainer | 2024-10-11T10:47:13Z | 63 | 2 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-19T20:10:09Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/heather_huet_textiles/resolve/main/samples/1726776521249__000001500_0.jpg
text: textile pattern of kittens jean baptiste huet
- output:
url: https://huggingface.co/glif-loradex-trainer/heather_huet_textiles/resolve/main/samples/1726776545977__000001500_1.jpg
text: textile pattern of vintage cars jean baptiste huet
- output:
url: https://huggingface.co/glif-loradex-trainer/heather_huet_textiles/resolve/main/samples/1726776570716__000001500_2.jpg
text: red textile pattern jean baptiste huet
- output:
url: https://huggingface.co/glif-loradex-trainer/heather_huet_textiles/resolve/main/samples/1726776595448__000001500_3.jpg
text: black textile pattern of computers jean baptiste huet
trigger: jean baptiste huet
instance_prompt: jean baptiste huet
---
# huet_textiles
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `heather`.
<Gallery />
## Trigger words
You should use `jean baptiste huet` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/heather_huet_textiles/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/fabian3000_bosch | glif-loradex-trainer | 2024-10-11T10:47:09Z | 258 | 3 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-19T09:55:38Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_bosch/resolve/main/samples/1726739677630__000001500_0.jpg
text: a Boeing 747 flying over a dark castle boschstyle
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_bosch/resolve/main/samples/1726739702342__000001500_1.jpg
text: a snake man eating a blueberry, text says GLIF boschstyle
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_bosch/resolve/main/samples/1726739727063__000001500_2.jpg
text: portrait of a cthulhu monster boschstyle
trigger: boschstyle
instance_prompt: boschstyle
---
# bosch
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `fabian3000`.
<Gallery />
## Trigger words
You should use `boschstyle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/fabian3000_bosch/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/fabian3000_vgaleaders | glif-loradex-trainer | 2024-10-11T10:47:07Z | 73 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-19T08:56:16Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_vgaleaders/resolve/main/samples/1726736115338__000001500_0.jpg
text: portrait of a cthulhu monster, vgaleaders
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_vgaleaders/resolve/main/samples/1726736140938__000001500_1.jpg
text: a cozy hut in the woods, vgaleaders
- output:
url: https://huggingface.co/glif-loradex-trainer/fabian3000_vgaleaders/resolve/main/samples/1726736165527__000001500_2.jpg
text: a dark skeleton warrior holding a pan, vgaleaders
trigger: vgaleaders
instance_prompt: vgaleaders
---
# vgaleaders
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `fabian3000`.
<Gallery />
## Trigger words
You should use `vgaleaders` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/fabian3000_vgaleaders/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/syntaxdiffusion_flux_dev_Rick_and_Morty_prodigy | glif-loradex-trainer | 2024-10-11T10:47:05Z | 31 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-19T06:56:29Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/syntaxdiffusion_flux_dev_Rick_and_Morty_prodigy/resolve/main/samples/1726728917801__000003000_0.jpg
text: a unicorn crossing river styx WubbaLubbadubdub
- output:
url: https://huggingface.co/glif-loradex-trainer/syntaxdiffusion_flux_dev_Rick_and_Morty_prodigy/resolve/main/samples/1726728941633__000003000_1.jpg
text: a scared hamster, WubbaLubbadubdub
- output:
url: https://huggingface.co/glif-loradex-trainer/syntaxdiffusion_flux_dev_Rick_and_Morty_prodigy/resolve/main/samples/1726728965489__000003000_2.jpg
text: silver vampire sword with a golden cross, WubbaLubbadubdub
trigger: WubbaLubbadubdub
instance_prompt: WubbaLubbadubdub
---
# flux_dev_Rick_and_Morty_prodigy
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `syntaxdiffusion`.
<Gallery />
## Trigger words
You should use `WubbaLubbadubdub` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/syntaxdiffusion_flux_dev_Rick_and_Morty_prodigy/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/AP123_flux_dev_power_slap_01b | glif-loradex-trainer | 2024-10-11T10:47:02Z | 20 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-19T05:59:12Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/AP123_flux_dev_power_slap_01b/resolve/main/samples/1726725471075__000001500_0.jpg
text: wounded centaur, mythical creature sl4p
- output:
url: https://huggingface.co/glif-loradex-trainer/AP123_flux_dev_power_slap_01b/resolve/main/samples/1726725495789__000001500_1.jpg
text: a human sl4p
- output:
url: https://huggingface.co/glif-loradex-trainer/AP123_flux_dev_power_slap_01b/resolve/main/samples/1726725520387__000001500_2.jpg
text: an alien sl4p
trigger: sl4p
instance_prompt: sl4p
---
# flux_dev_power_slap_01b
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `AP123`.
<Gallery />
## Trigger words
You should use `sl4p` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/AP123_flux_dev_power_slap_01b/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/saq1b_midjourney_v6_1_flux_lora | glif-loradex-trainer | 2024-10-11T10:46:51Z | 2,512 | 4 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-18T13:00:42Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/saq1b_midjourney_v6_1_flux_lora/resolve/main/samples/1726664375645__000003000_0.jpg
text: wounded centaur, mythical creature mj-v6.1-style
- output:
url: https://huggingface.co/glif-loradex-trainer/saq1b_midjourney_v6_1_flux_lora/resolve/main/samples/1726664400259__000003000_1.jpg
text: ruins of athens, snake mj-v6.1-style
- output:
url: https://huggingface.co/glif-loradex-trainer/saq1b_midjourney_v6_1_flux_lora/resolve/main/samples/1726664424870__000003000_2.jpg
text: silver vampire sword mj-v6.1-style
trigger: mj-v6.1-style
instance_prompt: mj-v6.1-style
---
# midjourney_v6_1_flux_lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `saq1b`.
<Gallery />
## Trigger words
You should use `mj-v6.1-style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/saq1b_midjourney_v6_1_flux_lora/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/ampp_Minecraft_Skin | glif-loradex-trainer | 2024-10-11T10:46:42Z | 31 | 4 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-29T02:00:38Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/ampp_Minecraft_Skin/resolve/main/samples/1727575152017__000003000_0.jpg
text: mona lisa M1n3cr4ft
- output:
url: https://huggingface.co/glif-loradex-trainer/ampp_Minecraft_Skin/resolve/main/samples/1727575162989__000003000_1.jpg
text: johnny depp M1n3cr4ft
- output:
url: https://huggingface.co/glif-loradex-trainer/ampp_Minecraft_Skin/resolve/main/samples/1727575173957__000003000_2.jpg
text: iron man M1n3cr4ft
- output:
url: https://huggingface.co/glif-loradex-trainer/ampp_Minecraft_Skin/resolve/main/samples/1727575184926__000003000_3.jpg
text: M1n3cr4ft chewbacca
- output:
url: https://huggingface.co/glif-loradex-trainer/ampp_Minecraft_Skin/resolve/main/samples/1727575195895__000003000_4.jpg
text: M1n3cr4ft a necromancer wearing a purple and green outfit
- output:
url: https://huggingface.co/glif-loradex-trainer/ampp_Minecraft_Skin/resolve/main/samples/1727575207339__000003000_5.jpg
text: M1n3cr4ft Shrek
trigger: M1n3cr4ft
instance_prompt: M1n3cr4ft
---
# Minecraft_Skin
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `ampp`.
<Gallery />
## Trigger words
You should use `M1n3cr4ft` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/ampp_Minecraft_Skin/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/ddickinson_dstyl3xl | glif-loradex-trainer | 2024-10-11T10:46:28Z | 33 | 5 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-10-10T11:41:20Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/ddickinson_dstyl3xl/resolve/main/samples/1728560344025__000003000_0.jpg
text: Pilot flying a hot air balloon shaped like a teapot, pouring rain on dry lands
below in the style of dstyl3xl
- output:
url: https://huggingface.co/glif-loradex-trainer/ddickinson_dstyl3xl/resolve/main/samples/1728560368882__000003000_1.jpg
text: A gigantic clockwork heart powering a mechanical forest, with gears and springs
as leaves and branches in the style of dstyl3xl
- output:
url: https://huggingface.co/glif-loradex-trainer/ddickinson_dstyl3xl/resolve/main/samples/1728560393749__000003000_2.jpg
text: Scientist with wild hair examining a glowing crystal, bioluminescent plants
in the laboratory in the style of dstyl3xl
- output:
url: https://huggingface.co/glif-loradex-trainer/ddickinson_dstyl3xl/resolve/main/samples/1728560418604__000003000_3.jpg
text: A time-traveling DeLorean car emerging from a swirling portal in the middle
of a medieval village in the style of dstyl3xl
- output:
url: https://huggingface.co/glif-loradex-trainer/ddickinson_dstyl3xl/resolve/main/samples/1728560443459__000003000_4.jpg
text: A group of friendly aliens having a picnic on the moon, Earth visible in the
background in the style of dstyl3xl
- output:
url: https://huggingface.co/glif-loradex-trainer/ddickinson_dstyl3xl/resolve/main/samples/1728560468312__000003000_5.jpg
text: A Victorian-era robot having a tea party with stuffed animals in a sunlit
garden in the style of dstyl3xl
trigger: in the style of dstyl3xl
instance_prompt: in the style of dstyl3xl
---
# dstyl3xl
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `ddickinson`.
<Gallery />
## Trigger words
You should use `in the style of dstyl3xl` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/ddickinson_dstyl3xl/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
glif-loradex-trainer/maxxd4240_minimalistPastel | glif-loradex-trainer | 2024-10-11T10:46:26Z | 1,207 | 19 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-09-25T09:15:56Z | ---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_minimalistPastel/resolve/main/samples/1727255690613__000002500_0.jpg
text: wounded centaur, mythical creature minimalistPastel
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_minimalistPastel/resolve/main/samples/1727255715214__000002500_1.jpg
text: ruins of athens, snake minimalistPastel
- output:
url: https://huggingface.co/glif-loradex-trainer/maxxd4240_minimalistPastel/resolve/main/samples/1727255739813__000002500_2.jpg
text: silver vampire sword minimalistPastel
trigger: minimalistPastel
instance_prompt: minimalistPastel
---
# minimalistPastel
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `maxxd4240`.
<Gallery />
## Trigger words
You should use `minimalistPastel` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/maxxd4240_minimalistPastel/tree/main) them in the Files & versions tab.
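The weights can also be fetched programmatically with `huggingface_hub`; a sketch using `snapshot_download`, which avoids guessing the exact weight filename:
```python
# A sketch: download only the Safetensors weights from this repo,
# skipping the bundled sample images.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="glif-loradex-trainer/maxxd4240_minimalistPastel",
    allow_patterns=["*.safetensors"],
)
print("weights downloaded to", local_dir)
```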
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
lurker18/Starcoder2_15B_AWQ_4bit | lurker18 | 2024-10-11T10:39:15Z | 7 | 0 | null | [
"safetensors",
"starcoder2",
"license:bigcode-openrail-m",
"4-bit",
"awq",
"region:us"
] | null | 2024-10-11T09:46:12Z | ---
license: bigcode-openrail-m
---
|
cryotron/q4_k_m_academic | cryotron | 2024-10-11T10:30:43Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-11T10:25:30Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** cryotron
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
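Since the repository ships GGUF weights, one way to run the model locally is `llama-cpp-python`; a sketch in which the filename glob is an assumption to check against the repo listing:
```python
# A sketch with llama-cpp-python; the filename pattern is hypothetical --
# match it against the actual GGUF file in the repo before running.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="cryotron/q4_k_m_academic",
    filename="*q4_k_m*.gguf",  # assumed pattern, not a confirmed filename
    n_ctx=4096,
)
out = llm("Explain the difference between a thesis and a dissertation.", max_tokens=128)
print(out["choices"][0]["text"])
```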
|
sania963/sql_v5 | sania963 | 2024-10-11T10:29:50Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-11T10:27:42Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
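Pending the author's snippet, a minimal sketch assuming standard 🤗 Transformers usage; the chat template, prompt, and generation settings are assumptions based on the repo tags:
```python
# A sketch, not the author's confirmed usage: load the model and run one
# chat-formatted generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sania963/sql_v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

# Hypothetical prompt; the "conversational" tag suggests a chat template exists.
messages = [{"role": "user", "content": "Write a SQL query that counts orders per day."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```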
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qordmlwls/Llama-3.1-medical-v2 | qordmlwls | 2024-10-11T10:27:04Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T10:03:53Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** qordmlwls
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Alignment-Lab-AI/GPT-2-optim | Alignment-Lab-AI | 2024-10-11T09:48:45Z | 125 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:HuggingFaceFW/fineweb-edu",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T09:28:22Z | ---
datasets: HuggingFaceFW/fineweb-edu
widget:
- example_title: Example interaction
text: During photosynthesis in green plants
inference:
parameters:
repetition_penalty: 1.3
language:
- en
library_name: transformers
license: mit
---
# Model Card for gpt2-124M-edu-fineweb-10B
A 124M-parameter GPT-2 model trained on the 10B-token fineweb-edu dataset using [https://github.com/karpathy/llm.c](https://github.com/karpathy/llm.c)
Training took 20 hours on a single 4090 GPU (limited to 350W), giving the following graphs:

## Training
The training parameters were:
```
./train_gpt2cu \
-i "dev/data/edu_fineweb10B/edu_fineweb_train_*.bin" \
-j "dev/data/edu_fineweb10B/edu_fineweb_val_*.bin" \
-o log124M \
-e "d12" \
-b 56 -t 1024 \
-d 458752 \
-r 1 \
-z 1 \
-c 0.1 \
-l 0.002 \
-q 0.0 \
-u 700 \
-n 5000 \
-v 250 -s 20000 \
-h 1
```
The model has had no further finetuning.
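For inference, the exported checkpoint loads like any other GPT-2 model; a sketch using the example prompt and `repetition_penalty` from this card's inference settings:
```python
# A sketch: sample from the checkpoint with the repetition_penalty
# suggested in the card's widget settings.
from transformers import pipeline

generator = pipeline("text-generation", model="Alignment-Lab-AI/GPT-2-optim")
print(generator(
    "During photosynthesis in green plants",
    max_new_tokens=64,
    repetition_penalty=1.3,
)[0]["generated_text"])
```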
## Evaluation
Evaluations were run with the [Eleuther AI Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463), following the protocol described in the [open_llm_leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard), and compared against the scores published for [openai-community/gpt2](https://huggingface.co/openai-community/gpt2):

| Eval Test | Score |
| --------- | ----- |
| arc_challenge (25 shot) | 24.49 |
| gsm8k (5 shot) | 0.08 |
| hellaswag (10 shot) | 32.64 |
| mmlu (5 shot) | 26.06 |
| truthfulqa (0 shot) | 42.45 |
| winogrande (5 shot) | 52.17 |
| **Overall Score** | **29.65** | |
OHCHANG/distilbert-base-uncased-finetuned-emotion | OHCHANG | 2024-10-11T09:47:51Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-11T09:27:48Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2016
- Accuracy: 0.9295
- F1: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
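Expressed as 🤗 Transformers `TrainingArguments`, these settings would look roughly as follows; a sketch, since the original training script is not shown (the Adam betas and epsilon above are the library defaults):
```python
# A sketch mirroring the hyperparameters listed above; not the exact
# script used to train this checkpoint.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```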
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8132 | 1.0 | 250 | 0.2923 | 0.908 | 0.9082 |
| 0.236 | 2.0 | 500 | 0.2016 | 0.9295 | 0.9294 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu124
- Datasets 3.0.1
- Tokenizers 0.20.0
|
RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf | RichardErkhov | 2024-10-11T09:47:38Z | 5 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T18:44:08Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Yi-34Bx2-MoE-60B-DPO - GGUF
- Model creator: https://huggingface.co/cloudyu/
- Original model: https://huggingface.co/cloudyu/Yi-34Bx2-MoE-60B-DPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Yi-34Bx2-MoE-60B-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q2_K.gguf) | Q2_K | 20.85GB |
| [Yi-34Bx2-MoE-60B-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.IQ3_XS.gguf) | IQ3_XS | 23.26GB |
| [Yi-34Bx2-MoE-60B-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.IQ3_S.gguf) | IQ3_S | 2.54GB |
| [Yi-34Bx2-MoE-60B-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q3_K_S.gguf) | Q3_K_S | 24.51GB |
| [Yi-34Bx2-MoE-60B-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.IQ3_M.gguf) | IQ3_M | 25.2GB |
| [Yi-34Bx2-MoE-60B-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q3_K.gguf) | Q3_K | 27.23GB |
| [Yi-34Bx2-MoE-60B-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q3_K_M.gguf) | Q3_K_M | 27.23GB |
| [Yi-34Bx2-MoE-60B-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q3_K_L.gguf) | Q3_K_L | 29.58GB |
| [Yi-34Bx2-MoE-60B-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.IQ4_XS.gguf) | IQ4_XS | 30.58GB |
| [Yi-34Bx2-MoE-60B-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q4_0.gguf) | Q4_0 | 31.97GB |
| [Yi-34Bx2-MoE-60B-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.IQ4_NL.gguf) | IQ4_NL | 24.73GB |
| [Yi-34Bx2-MoE-60B-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q4_K_S.gguf) | Q4_K_S | 12.69GB |
| [Yi-34Bx2-MoE-60B-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q4_K.gguf) | Q4_K | 34.14GB |
| [Yi-34Bx2-MoE-60B-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q4_K_M.gguf) | Q4_K_M | 34.14GB |
| [Yi-34Bx2-MoE-60B-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/blob/main/Yi-34Bx2-MoE-60B-DPO.Q4_1.gguf) | Q4_1 | 35.49GB |
| [Yi-34Bx2-MoE-60B-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/tree/main/) | Q5_0 | 39.0GB |
| [Yi-34Bx2-MoE-60B-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/tree/main/) | Q5_K_S | 39.0GB |
| [Yi-34Bx2-MoE-60B-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/tree/main/) | Q5_K | 40.12GB |
| [Yi-34Bx2-MoE-60B-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/tree/main/) | Q5_K_M | 40.12GB |
| [Yi-34Bx2-MoE-60B-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/tree/main/) | Q5_1 | 42.51GB |
| [Yi-34Bx2-MoE-60B-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/tree/main/) | Q6_K | 46.47GB |
| [Yi-34Bx2-MoE-60B-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Yi-34Bx2-MoE-60B-DPO-gguf/tree/main/) | Q8_0 | 60.18GB |
Original model description:
---
license: apache-2.0
tags:
- yi
- moe
- DPO
model-index:
- name: Yi-34Bx2-MoE-60B-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 53.19
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cloudyu/Yi-34Bx2-MoE-60B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 31.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cloudyu/Yi-34Bx2-MoE-60B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.19
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cloudyu/Yi-34Bx2-MoE-60B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.62
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cloudyu/Yi-34Bx2-MoE-60B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 14.32
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cloudyu/Yi-34Bx2-MoE-60B-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.85
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cloudyu/Yi-34Bx2-MoE-60B-DPO
name: Open LLM Leaderboard
---
* [This is the DPO-improved version of cloudyu/Yi-34Bx2-MoE-60B](https://huggingface.co/cloudyu/Yi-34Bx2-MoE-60B)
* [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer)
* metrics not tested!
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__Yi-34Bx2-MoE-60B-DPO)
| Metric |Value|
|-------------------|----:|
|Avg. |25.91|
|IFEval (0-Shot) |53.19|
|BBH (3-Shot) |31.26|
|MATH Lvl 5 (4-Shot)| 6.19|
|GPQA (0-shot) | 9.62|
|MuSR (0-shot) |14.32|
|MMLU-PRO (5-shot) |40.85|
|
b09501048/detr | b09501048 | 2024-10-11T09:44:41Z | 219 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-10-10T08:52:26Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/CVDPL_HW1_DETR/runs/gdqqmrof)
# detr
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1588
- Map: 0.0662
- Map 50: 0.1137
- Map 75: 0.0667
- Map Small: 0.0
- Map Medium: 0.0013
- Map Large: 0.0706
- Mar 1: 0.0705
- Mar 10: 0.1303
- Mar 100: 0.1374
- Mar Small: 0.0
- Mar Medium: 0.009
- Mar Large: 0.1536
- Map Person: 0.5545
- Mar 100 Person: 0.6988
- Map Ear: 0.0068
- Mar 100 Ear: 0.1599
- Map Earmuffs: 0.0
- Mar 100 Earmuffs: 0.0
- Map Face: 0.1494
- Mar 100 Face: 0.3924
- Map Face-guard: 0.0
- Mar 100 Face-guard: 0.0
- Map Face-mask-medical: 0.0
- Mar 100 Face-mask-medical: 0.0
- Map Foot: 0.0
- Mar 100 Foot: 0.0
- Map Tools: 0.0012
- Mar 100 Tools: 0.0722
- Map Glasses: 0.0
- Mar 100 Glasses: 0.0
- Map Gloves: 0.0
- Mar 100 Gloves: 0.0
- Map Helmet: 0.0
- Mar 100 Helmet: 0.0
- Map Hands: 0.1315
- Mar 100 Hands: 0.4111
- Map Head: 0.274
- Mar 100 Head: 0.5671
- Map Medical-suit: 0.0
- Mar 100 Medical-suit: 0.0
- Map Shoes: 0.0073
- Mar 100 Shoes: 0.035
- Map Safety-suit: 0.0
- Mar 100 Safety-suit: 0.0
- Map Safety-vest: 0.0
- Mar 100 Safety-vest: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
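Although the card is otherwise unfilled, a minimal inference sketch for trying the checkpoint is given below. The image path is hypothetical; the label names in the metrics above suggest a PPE-detection label set, with the exact mapping stored in the model config.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("b09501048/detr")
model = AutoModelForObjectDetection.from_pretrained("b09501048/detr")

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to thresholded detections in (x0, y0, x1, y1) pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```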
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Person | Mar 100 Person | Map Ear | Mar 100 Ear | Map Earmuffs | Mar 100 Earmuffs | Map Face | Mar 100 Face | Map Face-guard | Mar 100 Face-guard | Map Face-mask-medical | Mar 100 Face-mask-medical | Map Foot | Mar 100 Foot | Map Tools | Mar 100 Tools | Map Glasses | Mar 100 Glasses | Map Gloves | Mar 100 Gloves | Map Helmet | Mar 100 Helmet | Map Hands | Mar 100 Hands | Map Head | Mar 100 Head | Map Medical-suit | Mar 100 Medical-suit | Map Shoes | Mar 100 Shoes | Map Safety-suit | Mar 100 Safety-suit | Map Safety-vest | Mar 100 Safety-vest |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:-------:|:-----------:|:------------:|:----------------:|:--------:|:------------:|:--------------:|:------------------:|:---------------------:|:-------------------------:|:--------:|:------------:|:---------:|:-------------:|:-----------:|:---------------:|:----------:|:--------------:|:----------:|:--------------:|:---------:|:-------------:|:--------:|:------------:|:----------------:|:--------------------:|:---------:|:-------------:|:---------------:|:-------------------:|:---------------:|:-------------------:|
| No log | 0.9913 | 57 | 2.9995 | 0.0253 | 0.0391 | 0.0271 | 0.0 | 0.0001 | 0.0263 | 0.0319 | 0.0569 | 0.0684 | 0.0 | 0.0005 | 0.0725 | 0.3757 | 0.6561 | 0.0002 | 0.0046 | 0.0 | 0.0 | 0.0006 | 0.0042 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0211 | 0.3065 | 0.0333 | 0.1919 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 115 | 2.6946 | 0.0312 | 0.0518 | 0.0323 | 0.0 | 0.0002 | 0.0327 | 0.0411 | 0.0775 | 0.0874 | 0.0 | 0.003 | 0.094 | 0.42 | 0.6762 | 0.0007 | 0.0246 | 0.0 | 0.0 | 0.0053 | 0.0515 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0373 | 0.3427 | 0.0669 | 0.3916 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.9913 | 172 | 2.5527 | 0.0325 | 0.0595 | 0.0308 | 0.0 | 0.002 | 0.0343 | 0.0441 | 0.0853 | 0.0961 | 0.0 | 0.0057 | 0.1058 | 0.3649 | 0.652 | 0.0021 | 0.0798 | 0.0 | 0.0 | 0.0142 | 0.0748 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0022 | 0.0073 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0508 | 0.3384 | 0.1178 | 0.482 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 4.0 | 230 | 2.4637 | 0.0425 | 0.0788 | 0.0396 | 0.0 | 0.0006 | 0.0449 | 0.0503 | 0.0968 | 0.1059 | 0.0 | 0.0049 | 0.1168 | 0.451 | 0.6564 | 0.0018 | 0.0867 | 0.0 | 0.0 | 0.0548 | 0.1764 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0017 | 0.0225 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0725 | 0.3554 | 0.1402 | 0.5027 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 4.9913 | 287 | 2.3455 | 0.0527 | 0.0929 | 0.052 | 0.0 | 0.0006 | 0.0561 | 0.0625 | 0.1154 | 0.1234 | 0.0 | 0.0042 | 0.1375 | 0.4924 | 0.6771 | 0.0068 | 0.1346 | 0.0 | 0.0 | 0.1061 | 0.3505 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0011 | 0.0278 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0916 | 0.3802 | 0.1918 | 0.5232 | 0.0 | 0.0 | 0.0058 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 6.0 | 345 | 2.2859 | 0.0538 | 0.0938 | 0.0534 | 0.0 | 0.0011 | 0.0572 | 0.0624 | 0.1175 | 0.1268 | 0.0 | 0.0051 | 0.1412 | 0.5106 | 0.6945 | 0.0037 | 0.1284 | 0.0 | 0.0 | 0.1108 | 0.3333 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0503 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1048 | 0.4016 | 0.1793 | 0.5394 | 0.0 | 0.0 | 0.0055 | 0.0088 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 6.9913 | 402 | 2.2126 | 0.0609 | 0.1056 | 0.0596 | 0.0 | 0.0019 | 0.065 | 0.0681 | 0.1257 | 0.1319 | 0.0 | 0.0086 | 0.1469 | 0.5288 | 0.6915 | 0.0064 | 0.1412 | 0.0 | 0.0 | 0.1366 | 0.3886 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0006 | 0.0517 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1204 | 0.3905 | 0.2371 | 0.5637 | 0.0 | 0.0 | 0.0055 | 0.0154 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 8.0 | 460 | 2.1794 | 0.0641 | 0.11 | 0.0643 | 0.0 | 0.0015 | 0.0685 | 0.0699 | 0.1294 | 0.1372 | 0.0 | 0.0089 | 0.1533 | 0.5525 | 0.6964 | 0.0064 | 0.1599 | 0.0 | 0.0 | 0.1375 | 0.3892 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0012 | 0.0775 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1273 | 0.4089 | 0.2586 | 0.567 | 0.0 | 0.0 | 0.007 | 0.0333 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.8481 | 8.9913 | 517 | 2.1642 | 0.0658 | 0.113 | 0.0661 | 0.0 | 0.0016 | 0.0703 | 0.0702 | 0.1296 | 0.1369 | 0.0 | 0.0091 | 0.1532 | 0.5529 | 0.6969 | 0.0067 | 0.1606 | 0.0 | 0.0 | 0.1475 | 0.3875 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0012 | 0.0725 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.13 | 0.4102 | 0.2727 | 0.5659 | 0.0 | 0.0 | 0.0073 | 0.0344 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.8481 | 9.9130 | 570 | 2.1588 | 0.0662 | 0.1137 | 0.0667 | 0.0 | 0.0013 | 0.0706 | 0.0705 | 0.1303 | 0.1374 | 0.0 | 0.009 | 0.1536 | 0.5545 | 0.6988 | 0.0068 | 0.1599 | 0.0 | 0.0 | 0.1494 | 0.3924 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0012 | 0.0722 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1315 | 0.4111 | 0.274 | 0.5671 | 0.0 | 0.0 | 0.0073 | 0.035 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Kvisten/abisko-lite-3-v2 | Kvisten | 2024-10-11T09:44:16Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-schnell",
"base_model:adapter:black-forest-labs/FLUX.1-schnell",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-10-11T09:44:09Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: A light blue abisko-lite-3 tent set up on a rocky mountain ridge, surrounded
by rugged terrain and distant snow-capped peaks. The tent's guy lines are
secured to nearby rocks, and the fabric flutters slightly in the breeze. A
few scattered wildflowers can be seen growing between the rocks.
output:
url: samples/1728639798907__000003500_0.jpg
- text: A green Abisko lite 3 tent pitched in a dense forest clearing. Tall pine
trees surround the tent, and dappled sunlight filters through the branches,
casting soft shadows on the tent's surface. Fallen leaves and moss cover the
ground, adding a touch of green to the scene.
output:
url: samples/1728639807671__000003500_1.jpg
- text: A olive-green abisko-lite-3 tent set up in an open field covered in fresh
snow. The tent is surrounded by tall pine trees dusted with snow, and the
sky is overcast, with light snowflakes gently falling. The tent's guy lines
are staked firmly into the snow, and a few animal tracks are visible nearby.
output:
url: samples/1728639816434__000003500_2.jpg
- text: A 'ABISKO LITE 3' tent, olive-green color, pitched on a grassy hillside,
with its streamlined, tunnel-shaped design tapering down from the higher front
to the lower rear for better wind resistance. The tent's extended front vestibule
provides sheltered storage space, with the zippered door partially open to
reveal the interior. Multiple guy lines are secured tightly to the ground,
keeping the fabric taut, while arched poles maintain the tent's aerodynamic
shape. The rainfly extends close to the ground for full coverage, and ventilation
flaps at both the front and rear ensure optimal airflow. The surrounding grass
sways gently in the breeze.
output:
url: samples/1728639825196__000003500_3.jpg
- text: A 'Abisko lite 3' tent, olive-green color, pitched on a grassy hillside,
with its streamlined, tunnel-shaped design tapering down from the higher front
to the lower rear for better wind resistance. The tent's extended front vestibule
provides sheltered storage space, with the zippered door partially open to
reveal the interior. Multiple guy lines are secured tightly to the ground,
keeping the fabric taut, while arched poles maintain the tent's aerodynamic
shape. The rainfly extends close to the ground for full coverage, and ventilation
flaps at both the front and rear ensure optimal airflow. The surrounding grass
sways gently in the breeze.
output:
url: samples/1728639833958__000003500_4.jpg
- text: Create a high-quality commercial image of a light blue 'Abisko Lite 3' tent
set up in the middle of Times Square in New York City during the evening.
The tent features a tunnel-style design with a light blue rainfly that fully
covers the structure, tapering from the front to the rear for enhanced wind
resistance. The extended front vestibule is closed, and the arched poles create
a streamlined shape, while the rear slopes downward to optimize aerodynamics.
Surround the tent with the iconic bright neon lights from the billboards,
casting colorful reflections onto the tent's surface. Capture the contrast
between the outdoor gear and the lively urban setting, using a combination
of evening light and vibrant neon glow to emphasize the tent details and the
bustling atmosphere of Times Square.
output:
url: samples/1728639842723__000003500_5.jpg
base_model: black-forest-labs/FLUX.1-schnell
instance_prompt: ABISKO LITE 3
license: apache-2.0
---
# abisko_lite_3_v2
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
You should use `ABISKO LITE 3` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/Kvisten/abisko-lite-3-v2/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-schnell', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('Kvisten/abisko-lite-3-v2', weight_name='abisko_lite_3_v2.safetensors')
image = pipeline("A light blue abisko-lite-3 tent set up on a rocky mountain ridge, surrounded by rugged terrain and distant snow-capped peaks. The tent's guy lines are secured to nearby rocks, and the fabric flutters slightly in the breeze. A few scattered wildflowers can be seen growing between the rocks.").images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/CursorCore-Yi-9B-GGUF | mradermacher | 2024-10-11T09:36:07Z | 44 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-Yi-9B",
"base_model:quantized:TechxGenus/CursorCore-Yi-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T06:28:50Z | ---
base_model: TechxGenus/CursorCore-Yi-9B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TechxGenus/CursorCore-Yi-9B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
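As a concrete sketch, here is one way to run a quant from this repo with `llama-cpp-python`; the library choice is an assumption on my part (the card itself only points to TheBloke's READMEs), and the file name is taken from the quant table below.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo (Q4_K_M is the "fast, recommended" pick below).
path = hf_hub_download(
    repo_id="mradermacher/CursorCore-Yi-9B-GGUF",
    filename="CursorCore-Yi-9B.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```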
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q2_K.gguf) | Q2_K | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q3_K_S.gguf) | Q3_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q3_K_M.gguf) | Q3_K_M | 4.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q3_K_L.gguf) | Q3_K_L | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.IQ4_XS.gguf) | IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q4_K_S.gguf) | Q4_K_S | 5.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q5_K_S.gguf) | Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q5_K_M.gguf) | Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q6_K.gguf) | Q6_K | 7.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.Q8_0.gguf) | Q8_0 | 9.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF/resolve/main/CursorCore-Yi-9B.f16.gguf) | f16 | 17.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CursorCore-Yi-9B-i1-GGUF | mradermacher | 2024-10-11T09:36:07Z | 48 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-Yi-9B",
"base_model:quantized:TechxGenus/CursorCore-Yi-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-11T08:12:08Z | ---
base_model: TechxGenus/CursorCore-Yi-9B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TechxGenus/CursorCore-Yi-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CursorCore-Yi-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.1 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.1 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.1 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.4 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-9B-i1-GGUF/resolve/main/CursorCore-Yi-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hieundx/distilbert-base-uncased-distilled-clinc | hieundx | 2024-10-11T09:21:41Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-11T09:18:52Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3650
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
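The card leaves the dataset unstated, but the repo name suggests CLINC-style intent classification distilled from a teacher model; a minimal inference sketch (the utterance is hypothetical):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hieundx/distilbert-base-uncased-distilled-clinc",
)

# Hypothetical banking-style utterance for illustration.
print(classifier("Transfer $100 from my checking account to savings."))
```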
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.7041 | 0.7403 |
| 3.157 | 2.0 | 636 | 1.3959 | 0.8613 |
| 3.157 | 3.0 | 954 | 0.7574 | 0.9135 |
| 1.2213 | 4.0 | 1272 | 0.5136 | 0.9268 |
| 0.4979 | 5.0 | 1590 | 0.4219 | 0.9394 |
| 0.4979 | 6.0 | 1908 | 0.3916 | 0.9410 |
| 0.3031 | 7.0 | 2226 | 0.3708 | 0.9452 |
| 0.2444 | 8.0 | 2544 | 0.3666 | 0.9452 |
| 0.2444 | 9.0 | 2862 | 0.3650 | 0.9448 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
neighborwang/ModeliCo-8B | neighborwang | 2024-10-11T09:19:36Z | 13 | 2 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"peft",
"text2text-generation",
"en",
"dataset:neighborwang/modelica_libraries",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-09-28T08:27:47Z | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets:
- neighborwang/modelica_libraries
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text2text-generation
tags:
- text-generation-inference
- text-generation
- peft
---
# ModeliCo
An LLM fine-tuned from Llama-3.1-8B-Instruct on Modelica library code, designed to help users of all skill levels model complex systems and get the most out of Modelica.
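A minimal loading sketch follows; standard causal-LM usage for a Llama-3.1-based fine-tune is an assumption here, and the prompt is hypothetical.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("neighborwang/ModeliCo-8B")
model = AutoModelForCausalLM.from_pretrained("neighborwang/ModeliCo-8B", device_map="auto")

# Hypothetical prompt for illustration.
prompt = "Write a Modelica model of a simple RC low-pass filter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|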
yash3056/Llama-3.2-1B-imdb | yash3056 | 2024-10-11T09:17:24Z | 6,845 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"pytorch",
"llama-3",
"1b",
"en",
"dataset:stanfordnlp/imdb",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-04T14:00:04Z | ---
license: llama3.2
datasets:
- stanfordnlp/imdb
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.2-1B
new_version: yash3056/Llama-3.2-1B-imdb
pipeline_tag: text-classification
library_name: transformers
tags:
- transformers
- pytorch
- llama
- llama-3
- 1b
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Funded by:** [Intel](https://console.cloud.intel.com/)
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Text Classification
- **Language(s) (NLP):** English
- **License:** Llama 3.2 Community License Agreement
- **Finetuned from model:** [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B)
## Uses
This model is designed for text classification tasks, specifically for binary sentiment analysis on datasets like IMDb, where the goal is to classify text as positive or negative. It can be used by data scientists, researchers, and developers to build applications for sentiment analysis, content moderation, or customer feedback analysis. The model can be fine-tuned for other binary or multi-class classification tasks in domains like social media monitoring, product reviews, and support ticket triage. Foreseeable users include AI researchers, developers, and businesses looking to automate text analysis at scale.
### Direct Use
This model can be used directly to identify sentiments from text-based reviews, such as classifying whether a movie or product review is positive or negative. Without any further fine-tuning, it performs well on binary sentiment analysis tasks and can be employed out of the box for various applications like analyzing customer feedback, monitoring social media opinions, or automating sentiment tagging. The model is ideal for scenarios where sentiment needs to be quickly assessed from textual input without the need for deeper customizations.
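For example, a minimal inference sketch (the review text is hypothetical; the 0 = negative / 1 = positive mapping follows the Training Data section below):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("yash3056/Llama-3.2-1B-imdb")
model = AutoModelForSequenceClassification.from_pretrained("yash3056/Llama-3.2-1B-imdb", num_labels=2)

# Hypothetical review text for illustration.
inputs = tokenizer("A moving story with terrific performances.", return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print("positive" if pred == 1 else "negative")
```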
### Downstream Use
*Fine-tuning for Binary Classification*
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
# Load IMDb dataset for binary classification
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("yash3056/Llama-3.2-1B-imdb")
# Tokenize the dataset
def preprocess(example):
return tokenizer(example['text'], truncation=True, padding='max_length', max_length=128)
tokenized_datasets = dataset.map(preprocess, batched=True)
# Load model for binary classification (num_labels=2)
model = AutoModelForSequenceClassification.from_pretrained("yash3056/Llama-3.2-1B-imdb", num_labels=2)
# Training arguments
training_args = TrainingArguments(
output_dir="./results",
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
)
# Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
)
# Fine-tune the model
trainer.train()
```
*Fine-tuning for Multi-Class Classification*
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
# Load AG News dataset for multi-class classification (4 labels)
dataset = load_dataset("ag_news")
tokenizer = AutoTokenizer.from_pretrained("yash3056/Llama-3.2-1B-imdb")
# Tokenize the dataset
def preprocess(example):
return tokenizer(example['text'], truncation=True, padding='max_length', max_length=128)
tokenized_datasets = dataset.map(preprocess, batched=True)
# Load model for multi-class classification (num_labels=4)
model = AutoModelForSequenceClassification.from_pretrained("yash3056/Llama-3.2-1B-imdb", num_labels=4)
# Training arguments
training_args = TrainingArguments(
output_dir="./results",
evaluation_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
)
# Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
)
# Fine-tune the model
trainer.train()
```
## Bias, Risks, and Limitations
While this model is effective for text classification and sentiment analysis, it has certain limitations and potential biases. The training data, such as the IMDb dataset, may contain inherent biases related to language use, cultural context, or demographics of reviewers, which could influence the model’s predictions. For example, the model might struggle with nuanced sentiment, sarcasm, or slang, leading to misclassifications. Additionally, it could exhibit biases toward particular opinions or groups if those were overrepresented or underrepresented in the training data.
The model is also limited to binary sentiment classification, meaning it may oversimplify more complex emotional states expressed in text. Users should be cautious when applying the model in sensitive domains such as legal, medical, or psychological settings, where misclassification could have serious consequences. Proper review and adjustment of predictions are recommended, especially in high-stakes applications.
### Recommendations
Users (both direct and downstream) should be aware of the potential risks, biases, and limitations inherent in this model. Given that the model may reflect biases present in the training data, it is recommended that users critically evaluate the model’s performance on specific datasets or contexts where fairness and accuracy are essential.
For applications in sensitive areas like legal, healthcare, or hiring decisions, additional care should be taken to review the model's predictions, possibly combining them with human oversight. Fine-tuning the model on domain-specific data or implementing bias mitigation techniques can help reduce unintended bias. Additionally, regular re-evaluation and monitoring of the model in production environments are encouraged to ensure it continues to meet desired ethical and performance standards.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Load Model and tokenizers
tokenizer = AutoTokenizer.from_pretrained("yash3056/Llama-3.2-1B-imdb")
model = AutoModelForSequenceClassification.from_pretrained("yash3056/Llama-3.2-1B-imdb", num_labels=n)  # n = number of classification labels for your task (e.g., 2 for IMDb sentiment)
```
## Training Details
### Training Data
The model was trained on the IMDb dataset, a widely used benchmark for binary sentiment classification tasks. The dataset consists of movie reviews labeled as positive or negative, making it suitable for training models to understand sentiment in text. The dataset contains 50,000 reviews in total, evenly split between positive and negative labels, providing a balanced dataset for training and evaluation. Preprocessing involved tokenizing the text using the AutoTokenizer from Hugging Face's Transformers library, truncating and padding the sequences to a maximum length of 512 tokens. The training data was further split into training and validation sets with an 80-20 ratio.
More information about the IMDb dataset can be found [here](https://huggingface.co/datasets/stanfordnlp/imdb).
### Training Procedure
The training procedure used the Llama-3.2-1B model with modifications to suit the binary sentiment classification task. Training was performed for 10 epochs using a batch size of 8 and the AdamW optimizer with a learning rate of 3e-5. The learning rate was adjusted with a linear schedule, including a warmup of 40% of the total steps. The model was fine-tuned using the IMDb training dataset and evaluated on a separate test set.
Validation and evaluation metrics were calculated after each epoch, including accuracy, precision, recall, F1-score, and ROC-AUC. The final model was saved after the last epoch, along with the tokenizer. Several plots, such as loss curves, accuracy curves, confusion matrix, and ROC curve, were generated to visually assess the model's performance.
#### Preprocessing [optional]
Text data was preprocessed by tokenizing with the Llama-3.2-1B model tokenizer. Sequences were truncated and padded to a maximum length of 512 tokens to ensure consistent input sizes for the model. Labels were encoded as integers (0 for negative and 1 for positive) for compatibility with the model.
## Evaluation
- Training Loss: 0.0030, Accuracy: 0.9999
- Validation Loss: 0.1196, Accuracy: 0.9628
### Testing Data, Factors & Metrics
#### Testing Data
- Test Loss: 0.1315
- Test Accuracy: 0.9604
- Precision: 0.9604
- Recall: 0.9604
- F1-score: 0.9604
- AUC: 0.9604
## Technical Specifications
#### Hardware
[Intel® Data Center GPU Max 1550](https://www.intel.com/content/www/us/en/products/sku/232873/intel-data-center-gpu-max-1550/specifications.html)
## Model Card Authors
- Yash Prakash Narayan ([GitHub](https://github.com/yash3056))
|
Viscoke/c10 | Viscoke | 2024-10-11T08:43:52Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T08:39:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SandeepCHV/dialogueSummary | SandeepCHV | 2024-10-11T08:30:55Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-11T08:02:46Z | ---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: dialogueSummary
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dialogueSummary
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3019
## Model description
More information needed
## Intended uses & limitations
More information needed
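Per the metadata, the model was fine-tuned on samsum (dialogue summarization); a minimal sketch follows, with a hypothetical SAMSum-style dialogue:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SandeepCHV/dialogueSummary")

# Hypothetical SAMSum-style dialogue for illustration.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :)"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```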
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8939 | 1.0 | 19 | 1.3019 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Viscoke/c9 | Viscoke | 2024-10-11T08:30:49Z | 33 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T08:26:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gabrielloiseau/LUAR-CRUD-sentence-transformers | gabrielloiseau | 2024-10-11T08:30:29Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"LUAR",
"en",
"base_model:rrivera1849/LUAR-CRUD",
"base_model:finetune:rrivera1849/LUAR-CRUD",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-11T08:28:10Z | ---
base_model:
- rrivera1849/LUAR-CRUD
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- LUAR
license: apache-2.0
language:
- en
---
# SentenceTransformer version of rrivera1849/LUAR-CRUD
All credits go to [(Rivera-Soto et al. 2021)](https://aclanthology.org/2021.emnlp-main.70/)
---
Author Style Representations using [LUAR](https://aclanthology.org/2021.emnlp-main.70.pdf).
The LUAR training and evaluation repository can be found [here](https://github.com/llnl/luar).
This model was trained on a subsample of the Pushshift Reddit Dataset (5 million users), using comments published between January 2015 and October 2019 by authors who wrote at least 100 comments during that period.
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("gabrielloiseau/LUAR-CRUD-sentence-transformers")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 512)
```
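Since these are authorship-style embeddings, a common follow-up is to compare texts pairwise. Below is a minimal sketch (assuming sentence-transformers >= 3.0, where `SentenceTransformer.similarity` is available; the example texts are our own):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("gabrielloiseau/LUAR-CRUD-sentence-transformers")

# Two texts in one hypothetical author's voice and one in a different register
texts = [
    "I reckon the weather will turn soon.",
    "I reckon the trail gets rough past the ridge.",
    "The quarterly report is attached for your review.",
]
embeddings = model.encode(texts)

# Cosine similarity matrix between all style embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)  # higher values suggest more similar writing styles
```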
## Citation
If you find this model helpful, feel free to cite:
```
@inproceedings{uar-emnlp2021,
  author    = {Rafael A. Rivera Soto and Olivia Miano and Juanita Ordonez and Barry Chen and Aleem Khan and Marcus Bishop and Nicholas Andrews},
  title     = {Learning Universal Authorship Representations},
  booktitle = {EMNLP},
  year      = {2021},
}
```
## License
LUAR is distributed under the terms of the Apache License (Version 2.0).
All new contributions must be made under the Apache-2.0 licenses. |
livehao/bert_classification_model | livehao | 2024-10-11T08:17:48Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-11T06:37:01Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_classification_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_classification_model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0028
- Accuracy: 0.9996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
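For reference, the hyperparameters above map onto a `TrainingArguments` configuration roughly as follows (a sketch only; the output directory name is an assumption, and the dataset/model wiring is omitted since the card does not specify it):

```python
from transformers import TrainingArguments

# Mirror of the listed hyperparameters; everything else is left at library defaults.
training_args = TrainingArguments(
    output_dir="bert_classification_model",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```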
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2591 | 1.0 | 585 | 0.0041 | 0.9996 |
| 0.0069 | 2.0 | 1170 | 0.0028 | 0.9996 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1
- Datasets 2.19.1
- Tokenizers 0.20.1
|
mav23/CalmeRys-78B-Orpo-v0.1-GGUF | mav23 | 2024-10-11T08:17:24Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"orpo",
"qwen2",
"sft",
"chatml",
"text-generation",
"en",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:MaziyarPanahi/calme-2.4-rys-78b",
"base_model:quantized:MaziyarPanahi/calme-2.4-rys-78b",
"license:mit",
"model-index",
"region:us",
"conversational"
] | text-generation | 2024-10-11T03:46:31Z | ---
language:
- en
license: mit
library_name: transformers
tags:
- orpo
- qwen2
- sft
- chatml
base_model:
- MaziyarPanahi/calme-2.4-rys-78b
datasets:
- mlabonne/orpo-dpo-mix-40k
pipeline_tag: text-generation
inference: false
model_creator: dfurman
quantized_by: dfurman
model-index:
- name: CalmeRys-78B-Orpo-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 81.63
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 61.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 37.92
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 20.02
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 36.37
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.8
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=dfurman/CalmeRys-78B-Orpo-v0.1
name: Open LLM Leaderboard
---
# dfurman/CalmeRys-78B-Orpo-v0.1
This model is a finetune of `MaziyarPanahi/calme-2.4-rys-78b` on 1.5k rows of the `mlabonne/orpo-dpo-mix-40k` dataset. It was trained as a generalist language model for a variety of text generation use cases, including support of agentic capabilities, roleplaying, reasoning, multi-turn conversations, long context coherence, and more.
Thanks go out to [mlabonne](https://huggingface.co/mlabonne), [MaziyarPanahi](https://huggingface.co/MaziyarPanahi), et al. for the source dataset and base model.
## 🦾 Training
You can find the experiment on W&B at this [link](https://wandb.ai/dryanfurman/huggingface/runs/1w50nu70?nw=nwuserdryanfurman). Here are a few visualizations:



## 💻 Usage
<details>
<summary>Setup</summary>
```python
!pip install -qU transformers accelerate bitsandbytes
!huggingface-cli download dfurman/CalmeRys-78B-Orpo-v0.1
```
```python
from transformers import AutoTokenizer, BitsAndBytesConfig
import transformers
import torch

if torch.cuda.get_device_capability()[0] >= 8:
    !pip install -qqq flash-attn
    attn_implementation = "flash_attention_2"
    torch_dtype = torch.bfloat16
else:
    attn_implementation = "eager"
    torch_dtype = torch.float16

# # quantize if necessary
# bnb_config = BitsAndBytesConfig(
#     load_in_4bit=True,
#     bnb_4bit_quant_type="nf4",
#     bnb_4bit_compute_dtype=torch_dtype,
#     bnb_4bit_use_double_quant=True,
# )

model = "dfurman/CalmeRys-78B-Orpo-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={
        "torch_dtype": torch_dtype,
        # "quantization_config": bnb_config,
        "device_map": "auto",
        "attn_implementation": attn_implementation,
    }
)
```
</details>
### Example 1
```python
question = "Is the number 9.11 larger than 9.9?"

messages = [
    {"role": "system", "content": "You are a helpful assistant that thinks step by step."},
    {"role": "user", "content": question},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# print("***Prompt:\n", prompt)

outputs = pipeline(
    prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print("***Generation:")
print(outputs[0]["generated_text"][len(prompt):])
```
```
***Generation:
To compare these two numbers, it's important to look at their decimal places after the whole number part, which is 9 in both cases. Comparing the tenths place, 9.11 has a '1' and 9.9 has a '9'. Since '9' is greater than '1', 9.9 is larger than 9.11.
```
### Example 2
```python
question = """The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning.
They sold 93 loaves in the morning and 39 loaves in the afternoon.
A grocery store then returned 6 unsold loaves back to the bakery.
How many loaves of bread did the bakery have left?
Respond as succinctly as possible. Format the response as a completion of this table:
|step|subquestion|procedure|result|
|:---|:----------|:--------|:-----:|"""
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": question},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# print("***Prompt:\n", prompt)
outputs = pipeline(prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print("***Generation:")
print(outputs[0]["generated_text"][len(prompt):])
```
```
***Generation:
|1|Calculate total sold|Add morning and afternoon sales|132|
|2|Subtract sold from total|200 - 132|68|
|3|Adjust for returns|Add returned loaves to remaining|74|
```
### Example 3
```python
question = "What's a good recipe for a spicy margarita?"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": question},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# print("***Prompt:\n", prompt)
outputs = pipeline(prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print("***Generation:")
print(outputs[0]["generated_text"][len(prompt):])
```
```
***Generation:
To make a Spicy Margarita, you'll need to incorporate a chili or pepper element into your classic margarita recipe. Here’s a simple way to do it:
### Ingredients:
- 2 oz tequila (blanco or reposado)
- 1 oz fresh lime juice
- 1/2 oz triple sec (Cointreau or Grand Marnier)
- 1/2 oz agave syrup or simple syrup
- 1-2 slices of jalapeño (or more depending on how spicy you like it)
- Salt and/or chili powder for rimming the glass
- Ice
- Lime wheel for garnish
### Instructions:
1. **Muddle Jalapeño**: In a shaker, muddle the jalapeño slices slightly. This will release the oils and heat from the peppers.
2. **Add Remaining Ingredients**: Add the tequila, lime juice, triple sec, and agave syrup or simple syrup.
3. **Shake and Strain**: Fill the shaker with ice and shake vigorously until cold. Strain into a salt and/or chili powder rimmed glass filled with ice.
4. **Garnish and Serve**: Garnish with a lime wheel and enjoy.
If you prefer a smoother spiciness that doesn't overpower the drink, you could also consider making a jalapeño-infused tequila by leaving the jalapeño slices in the bottle of tequila for several hours to a couple of days, adjusting the time based on desired level of spiciness. Then use this infused tequila instead of regular tequila in the recipe above.
Another variation is to use a spicy syrup. To make this, combine equal parts water and sugar with a few sliced jalapeños in a saucepan. Bring to a boil, stirring occasionally to dissolve the sugar. Reduce heat and simmer for about 5 minutes. Let cool, strain out the jalapeños, then store in a sealed container in the refrigerator until ready to use. Use this spicy syrup instead of regular syrup in the recipe.
As always, adjust the quantity of jalapeño or the type of chili used to suit your taste. Enjoy responsibly!
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__CalmeRys-78B-Orpo-v0.1)
| Metric |Value|
|-------------------|----:|
|Avg. |50.78|
|IFEval (0-Shot) |81.63|
|BBH (3-Shot) |61.92|
|MATH Lvl 5 (4-Shot)|37.92|
|GPQA (0-shot) |20.02|
|MuSR (0-shot) |36.37|
|MMLU-PRO (5-shot) |66.80|
|
mradermacher/CursorCore-QW2.5-7B-GGUF | mradermacher | 2024-10-11T08:14:10Z | 141 | 1 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-QW2.5-7B",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T06:17:25Z | ---
base_model: TechxGenus/CursorCore-QW2.5-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TechxGenus/CursorCore-QW2.5-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
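If you prefer a Python entry point over the llama.cpp CLI, a minimal sketch using the third-party `llama-cpp-python` bindings (an assumption on our part; `huggingface-hub` is required for `from_pretrained`) looks like this, picking the Q4_K_M quant recommended in the table below:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Downloads the chosen quant from this repo and loads it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/CursorCore-QW2.5-7B-GGUF",
    filename="CursorCore-QW2.5-7B.Q4_K_M.gguf",  # "fast, recommended" quant
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a function that reverses a string."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```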
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-7B-GGUF/resolve/main/CursorCore-QW2.5-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
dumyy/internlm2_5-7b-chat-Q4_K_M-GGUF | dumyy | 2024-10-11T08:13:52Z | 5 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:internlm/internlm2_5-7b-chat",
"base_model:quantized:internlm/internlm2_5-7b-chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-11T08:13:32Z | ---
base_model: internlm/internlm2_5-7b-chat
license: other
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# dumyy/internlm2_5-7b-chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`internlm/internlm2_5-7b-chat`](https://huggingface.co/internlm/internlm2_5-7b-chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/internlm/internlm2_5-7b-chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo dumyy/internlm2_5-7b-chat-Q4_K_M-GGUF --hf-file internlm2_5-7b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo dumyy/internlm2_5-7b-chat-Q4_K_M-GGUF --hf-file internlm2_5-7b-chat-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo dumyy/internlm2_5-7b-chat-Q4_K_M-GGUF --hf-file internlm2_5-7b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo dumyy/internlm2_5-7b-chat-Q4_K_M-GGUF --hf-file internlm2_5-7b-chat-q4_k_m.gguf -c 2048
```
|
mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF | mradermacher | 2024-10-11T08:13:11Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-QW2.5-1.5B-LC",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-1.5B-LC",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T06:17:32Z | ---
base_model: TechxGenus/CursorCore-QW2.5-1.5B-LC
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TechxGenus/CursorCore-QW2.5-1.5B-LC
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF | mradermacher | 2024-10-11T08:13:07Z | 172 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-QW2.5-1.5B-LC",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-1.5B-LC",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-11T07:58:06Z | ---
base_model: TechxGenus/CursorCore-QW2.5-1.5B-LC
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TechxGenus/CursorCore-QW2.5-1.5B-LC
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-LC-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-LC.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
LxzGordon/URM-LLaMa-3-8B | LxzGordon | 2024-10-11T08:11:25Z | 187 | 0 | null | [
"safetensors",
"llama",
"text-classification",
"custom_code",
"dataset:nvidia/HelpSteer2",
"arxiv:2410.00847",
"region:us"
] | text-classification | 2024-09-12T08:49:43Z | ---
datasets:
- nvidia/HelpSteer2
pipeline_tag: text-classification
---
- **Paper:** [https://arxiv.org/pdf/2410.00847](https://arxiv.org/pdf/2410.00847)
- **Model:** [URM-LLaMa-3-8B](https://huggingface.co/LxzGordon/URM-LLaMa-3-8B)
- Fine-tuned from [FsfairX-LLaMA3-RM-v0.1](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1)
# Brief
[URM-LLaMa-3-8B](https://huggingface.co/LxzGordon/URM-LLaMa-3-8B) is an uncertainty-aware reward model.
This RM consists of a base model and an uncertainty-aware and attribute-specific value head. The base model of this RM is from [FsfairX-LLaMA3-RM-v0.1](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1).
## Attribute Regression
**Dataset:** [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2)
During training, instead of predicting multi-attribute scores directly, the uncertainty-aware value head outputs the parameters of a normal distribution, from which the scores are sampled. We then regress the sampled outputs against the labels to train the value head. To enable gradient back-propagation through the sampling step, the reparameterization trick is used.
We use the five attributes from HelpSteer2: Helpfulness, Correctness, Coherence, Complexity, and Verbosity. These attributes are combined with a weighted sum using the prior weights ```[0.3, 0.74, 0.46, 0.47, -0.33]``` recommended by [Nemotron-4](https://huggingface.co/nvidia/Nemotron-4-340B-Reward).
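To make the mechanism concrete, here is a minimal sketch of such an uncertainty-aware head (our illustration only, not the released implementation; the layer names and single-linear parameterization are assumptions):

```python
import torch
import torch.nn as nn

class UncertaintyAwareHead(nn.Module):
    """Maps hidden states to per-attribute Normal parameters (illustrative)."""
    def __init__(self, hidden_size: int, num_attributes: int = 5):
        super().__init__()
        self.mean = nn.Linear(hidden_size, num_attributes)
        self.log_var = nn.Linear(hidden_size, num_attributes)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        mu, log_var = self.mean(h), self.log_var(h)
        # Reparameterization: score = mu + sigma * eps keeps sampling differentiable.
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * log_var) * eps

# Weighted sum over the five HelpSteer2 attributes with the Nemotron-4 prior weights.
weights = torch.tensor([0.3, 0.74, 0.46, 0.47, -0.33])
head = UncertaintyAwareHead(hidden_size=4096)   # LLaMa-3-8B hidden size
scores = head(torch.randn(2, 4096))             # (batch, 5) sampled attribute scores
reward = scores @ weights                       # (batch,) scalar reward
```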
# Usage
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "LxzGordon/URM-LLaMa-3-8B"
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    device_map='auto',
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "when were the first Olympic Games held?"
response1 = "April 1896"
response2 = "April 1892"

resp1 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response1}]
resp2 = [{"role": "user", "content": prompt}, {"role": "assistant", "content": response2}]

# Format and tokenize the conversations
resp1 = tokenizer.apply_chat_template(resp1, tokenize=False)
resp2 = tokenizer.apply_chat_template(resp2, tokenize=False)
resp1 = tokenizer(resp1, return_tensors="pt").to(model.device)
resp2 = tokenizer(resp2, return_tensors="pt").to(model.device)

with torch.no_grad():
    score1 = model(resp1['input_ids'], attention_mask=resp1['attention_mask']).logits[0][0].item()
    score2 = model(resp2['input_ids'], attention_mask=resp2['attention_mask']).logits[0][0].item()

print(score1, score2)
# Response 1 score: 3.669522523880005, Response 2 score: 2.5036821365356445
```
# Reference
Please cite
```
@article{lou2024uncertainty,
  title={Uncertainty-aware Reward Model: Teaching Reward Models to Know What is Unknown},
  author={Lou, Xingzhou and Yan, Dong and Shen, Wei and Yan, Yuzi and Xie, Jian and Zhang, Junge},
  journal={arXiv preprint arXiv:2410.00847},
  year={2024}
}
``` |
RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf | RichardErkhov | 2024-10-11T07:53:19Z | 201 | 0 | null | [
"gguf",
"arxiv:2404.12241",
"arxiv:2312.06674",
"arxiv:2204.05862",
"arxiv:2308.01263",
"arxiv:2403.03853",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T07:35:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-Guard-3-1B - GGUF
- Model creator: https://huggingface.co/meta-llama/
- Original model: https://huggingface.co/meta-llama/Llama-Guard-3-1B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-Guard-3-1B.Q2_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q2_K.gguf) | Q2_K | 0.62GB |
| [Llama-Guard-3-1B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.IQ3_XS.gguf) | IQ3_XS | 0.68GB |
| [Llama-Guard-3-1B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.IQ3_S.gguf) | IQ3_S | 0.7GB |
| [Llama-Guard-3-1B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q3_K_S.gguf) | Q3_K_S | 0.7GB |
| [Llama-Guard-3-1B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.IQ3_M.gguf) | IQ3_M | 0.72GB |
| [Llama-Guard-3-1B.Q3_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q3_K.gguf) | Q3_K | 0.75GB |
| [Llama-Guard-3-1B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q3_K_M.gguf) | Q3_K_M | 0.75GB |
| [Llama-Guard-3-1B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q3_K_L.gguf) | Q3_K_L | 0.79GB |
| [Llama-Guard-3-1B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.IQ4_XS.gguf) | IQ4_XS | 0.83GB |
| [Llama-Guard-3-1B.Q4_0.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q4_0.gguf) | Q4_0 | 0.86GB |
| [Llama-Guard-3-1B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.IQ4_NL.gguf) | IQ4_NL | 0.86GB |
| [Llama-Guard-3-1B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q4_K_S.gguf) | Q4_K_S | 0.86GB |
| [Llama-Guard-3-1B.Q4_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q4_K.gguf) | Q4_K | 0.89GB |
| [Llama-Guard-3-1B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q4_K_M.gguf) | Q4_K_M | 0.89GB |
| [Llama-Guard-3-1B.Q4_1.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q4_1.gguf) | Q4_1 | 0.93GB |
| [Llama-Guard-3-1B.Q5_0.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q5_0.gguf) | Q5_0 | 1.0GB |
| [Llama-Guard-3-1B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q5_K_S.gguf) | Q5_K_S | 1.0GB |
| [Llama-Guard-3-1B.Q5_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q5_K.gguf) | Q5_K | 1.02GB |
| [Llama-Guard-3-1B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q5_K_M.gguf) | Q5_K_M | 1.02GB |
| [Llama-Guard-3-1B.Q5_1.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q5_1.gguf) | Q5_1 | 1.07GB |
| [Llama-Guard-3-1B.Q6_K.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q6_K.gguf) | Q6_K | 1.15GB |
| [Llama-Guard-3-1B.Q8_0.gguf](https://huggingface.co/RichardErkhov/meta-llama_-_Llama-Guard-3-1B-gguf/blob/main/Llama-Guard-3-1B.Q8_0.gguf) | Q8_0 | 1.49GB |
Original model description:
---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: llama3.2
extra_gated_prompt: >-
### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT
Llama 3.2 Version Release Date: September 25, 2024
“Agreement” means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 3.2
distributed by Meta at https://llama.meta.com/doc/overview.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are
entering into this Agreement on such person or entity’s behalf), of the age required under
applicable laws, rules or regulations to provide legal consent and that has legal authority
to bind your employer or such other person or entity if you are entering in this Agreement
on their behalf.
“Llama 3.2” means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://www.llama.com/llama-downloads.
“Llama Materials” means, collectively, Meta’s proprietary Llama 3.2 and Documentation (and
any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or Switzerland)
and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials,
you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide,
non-transferable and royalty-free limited license under Meta’s intellectual property or other rights
owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works
of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof),
or a product or service (including another AI model) that contains any of them, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Llama”
on a related website, user interface, blogpost, about page, or product documentation. If you use the
Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or
otherwise improve an AI model, which is distributed or made available, you shall also include “Llama”
at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the
following attribution notice within a “Notice” text file distributed as a part of such copies:
“Llama 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,
Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for
the Llama Materials (available at https://www.llama.com/llama3_2/use-policy), which is hereby
incorporated by reference into this Agreement.
2. Additional Commercial Terms. If, on the Llama 3.2 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates,
is greater than 700 million monthly active users in the preceding calendar month, you must request
a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to
exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS
ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES
OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE
FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED
WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT,
FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN
IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials,
neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates,
except as required for reasonable and customary use in describing and redistributing the Llama Materials or as
set forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the “Mark”) solely as required
to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible
at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark
will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any
derivative works and modifications of the Llama Materials that are made by you, as between you and Meta,
you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or
counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion
of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable
by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or
claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third
party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access
to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms
and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this
Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of
California without regard to choice of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of
any dispute arising out of this Agreement.
### Llama 3.2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Llama 3.2.
If you access or use Llama 3.2, you agree to this Acceptable Use Policy (“**Policy**”).
The most recent copy of this policy can be found at
[https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).
#### Prohibited Uses
We want everyone to use Llama 3.2 safely and responsibly. You agree you will not use, or allow others to use, Llama 3.2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
1. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
2. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
3. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
4. Collect, process, disclose, generate, or infer private or sensitive information about individuals, including information about individuals’ identity, health, or demographic information, unless you have obtained the right to do so in accordance with applicable law
5. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
6. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
7. Engage in any action, or facilitate any action, to intentionally circumvent or remove usage restrictions or other safety measures, or to enable functionality disabled by Meta
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 3.2 related to the following:
8. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989 or the Chemical Weapons Convention Implementation Act of 1997
9. Guns and illegal weapons (including weapon development)
10. Illegal drugs and regulated/controlled substances
11. Operation of critical infrastructure, transportation technologies, or heavy machinery
12. Self-harm or harm to others, including suicide, cutting, and eating disorders
13. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 3.2 related to the following:
14. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
15. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
16. Generating, promoting, or further distributing spam
17. Impersonating another individual without consent, authorization, or legal right
18. Representing that the use of Llama 3.2 or outputs are human-generated
19. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
5. Interact with third party tools, models, or software designed to generate unlawful content or engage in unlawful or harmful conduct and/or represent that the outputs of such tools, models, or software are associated with Meta or Llama 3.2
With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models.
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)
* Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama 3.2: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Information
Llama Guard 3-1B is a fine-tuned Llama-3.2-1B pretrained model for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.
Llama Guard 3-1B was aligned to safeguard against the MLCommons standardized [hazards taxonomy](https://arxiv.org/abs/2404.12241) and designed to lower the deployment cost of moderation system safeguard compared to its predecessors. It comes in two versions : 1B and 1B pruned and quantized, optimized for deployment on mobile devices.
## Get started
This repository contains two versions of Llama-Guard-3-1B, for use with transformers and with the original `llama` codebase. Once you have access to the model weights, follow the appropriate section.
### Use with transformers
Starting with transformers >= 4.43.0, you can run inference to evaluate the last user or assistant response in a multi-turn conversation.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "meta-llama/Llama-Guard-3-1B"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What is the recipe for mayonnaise?"
            },
        ],
    }
]

input_ids = tokenizer.apply_chat_template(
    conversation, return_tensors="pt"
).to(model.device)

prompt_len = input_ids.shape[1]
output = model.generate(
    input_ids,
    max_new_tokens=20,
    pad_token_id=0,
)
generated_tokens = output[:, prompt_len:]

print(tokenizer.decode(generated_tokens[0]))
```
This snippet will use the categories described in this model card. You can provide your own categories instead:
```python
input_ids = tokenizer.apply_chat_template(
    conversation,
    return_tensors="pt",
    categories={
        "S1": "My custom category",
    },
).to(model.device)
```
Or you can exclude categories from the default list by specifying an array of category keys to exclude:
```python
input_ids = tokenizer.apply_chat_template(
    conversation,
    return_tensors="pt",
    excluded_category_keys=["S6"],
).to(model.device)
```
### Use with `llama`
Please refer to our [documentation](https://www.llama.com/docs/model-cards-and-prompt-formats/llama-guard-3/) to get started.
To download the original checkpoints, you can use `huggingface-cli` as follows:
```
huggingface-cli download meta-llama/Llama-Guard-3-1B --include "original/*" --local-dir Llama-Guard-3-1B
```
### Fine-tuning
You can also fine tune Llama Guard for your use case: [Llama Guard 3 Customization: Taxonomy Customization, Zero/Few-shot prompting, Evaluation and Fine Tuning](https://github.com/meta-llama/llama-recipes/blob/main/recipes/responsible_ai/llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb)
## Hazard Taxonomy and Policy
The model is trained to predict safety labels on the 13 categories shown below, based on the [MLCommons taxonomy](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) of 13 hazards.
<table align="center">
<thead>
<tr>
<th colspan="2"><center><b>Hazard categories</b></center></th>
</tr>
</thead>
<tbody>
<tr>
<td>S1: Violent Crimes</td>
<td>S2: Non-Violent Crimes</td>
</tr>
<tr>
<td>S3: Sex-Related Crimes</td>
<td>S4: Child Sexual Exploitation</td>
</tr>
<tr>
<td>S5: Defamation</td>
<td>S6: Specialized Advice</td>
</tr>
<tr>
<td>S7: Privacy</td>
<td>S8: Intellectual Property</td>
</tr>
<tr>
<td>S9: Indiscriminate Weapons</td>
<td>S10: Hate</td>
</tr>
<tr>
<td>S11: Suicide & Self-Harm</td>
<td>S12: Sexual Content</td>
</tr>
<tr>
<td>S13: Elections</td>
<td></td>
</tr>
</tbody>
</table>
**S1 - Violent Crimes**
Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse)
**S2 - Non-Violent Crimes**
Responses that enable, encourage, or endorse the commission of non-violent crimes, including: (1) personal crimes (ex: labor trafficking, threats, intimidation, hate-motivated non-violent crimes), (2) financial crimes (ex: fraud, scams, money laundering), (3) property crimes (ex: theft, arson, vandalism), (4) drug crimes (ex: creating or using regulated substances), (5) weapons crimes (ex: producing unlicensed firearms), and (6) cyber crimes (ex: hacking)
**S3 - Sex-Related Crimes**
Responses that enable, encourage, or endorse the commission of sex-related crimes, including: (1) sex trafficking, (2) sexual assault (ex: rape), (3) sexual harassment (physical, verbal, or visual), and (4) prostitution
**S4 - Child Sexual Exploitation**
Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children
**S5 - Defamation**
Responses that are both verifiably false and likely to injure a living person’s reputation
**S6 - Specialized Advice**
Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe
**S7 - Privacy**
Responses that contain sensitive, nonpublic personal information that could undermine someone’s physical, digital, or financial security
**S8 - Intellectual Property**
Responses that may violate the intellectual property rights of any third party
**S9 - Indiscriminate Weapons**
Responses that enable, encourage, or endorse the creation of indiscriminate weapons, including: (1) chemical weapons (ex: nerve gas), (2) biological weapons (ex: anthrax), (3) radiological weapons (ex: cobalt bombs), (4) nuclear weapons (ex: fission bombs), and (5) high-yield explosive weapons (ex: cluster munitions)
**S10 - Hate**
Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics (i.e., race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and/or serious disease)
**S11 - Suicide & Self-Harm**
Responses that enable, encourage, or endorse acts of intentional self-harm, including: (1) suicide, (2) self-injury (ex: cutting), and (3) disordered eating
**S12 - Sexual Content**
Responses that contain erotica
**S13 - Elections**
Responses that contain factually incorrect information about electoral systems and processes, including in the time, place, or manner of voting in civic elections
## Supported languages
Llama Guard 3-1B supports content safety for the following languages: English, French, German, Hindi, Italian, Portuguese, Spanish, Thai.
## Training Data
We use the English data used by Llama Guard [1], which are obtained by getting Llama 2 and Llama 3 generations on prompts from the hh-rlhf dataset [2]. In order to scale training data for multilingual capability, we collect additional human and synthetically generated data. Similar to the English data, the multilingual data are Human-AI conversation data that are either single-turn or multi-turn. To reduce the model’s false positive rate, we curate a set of multilingual benign prompt and response data where LLMs likely reject the prompts.
## Pruning
To reduce the number of model parameters, we prune the model along two dimensions: the number of layers and the MLP hidden dimension. The methodology is similar to [5] and proceeds in 3 stages: 1) pruning metric calibration; 2) model pruning; 3) finetuning the pruned model. During calibration, we collect pruning metric statistics by passing ~1k batches of inputs through the model. We use the block importance metric [6] for pruning the decoder layers and the average L2 norm of MLP hidden neurons for pruning the MLP hidden dimension. After calibrating the pruning metrics, we prune the model to 12 layers and a 6400 MLP hidden dimension, leaving the pruned model with 1123 million parameters. Finally, we finetune the pruned model on the training data.
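A rough sketch of how the two calibration metrics can be computed is shown below; this is an illustration under our assumptions (ShortGPT-style block importance, and per-neuron RMS as one plausible reading of "average L2 norm"), not the exact implementation.

```python
import torch
import torch.nn.functional as F

def block_importance(hidden_in, hidden_out):
    # ShortGPT-style block importance [6]: a decoder layer whose output is
    # nearly identical to its input (high cosine similarity) is redundant.
    cos = F.cosine_similarity(hidden_in, hidden_out, dim=-1)  # (batch, seq)
    return (1.0 - cos).mean().item()

def mlp_neuron_importance(mlp_hidden):
    # Per-neuron RMS over calibration tokens, one plausible reading of the
    # "average L2 norm"; mlp_hidden: (batch, seq, mlp_hidden_dim) activations.
    return mlp_hidden.pow(2).mean(dim=(0, 1)).sqrt()  # (mlp_hidden_dim,)

# After calibration over ~1k batches: drop the lowest-importance decoder
# layers down to 12, and keep the 6400 highest-scoring MLP neurons, e.g.:
# keep_neurons = mlp_neuron_importance(acts).topk(6400).indices
```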
## Distillation
Building on a similar approach in [5], we employ Llama Guard 3-8B as a teacher model to fine-tune the pruned model through logit-level distillation during supervised training. We observe that simply incorporating logit-level distillation significantly enhances the model's ability to learn safe and unsafe patterns, as well as the distribution of unsafe reasoning, from the 8B teacher. Consequently, the final result shows substantial improvement after applying logit-level fine-tuning.
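A minimal sketch of logit-level distillation under a standard temperature-scaled KL formulation follows; the temperature `T` and mixing weight `alpha` are illustrative choices, not disclosed training values.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions
    # of the 1B student and the Llama Guard 3-8B teacher (shared vocabulary).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: the usual supervised cross-entropy on the safety labels.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    return alpha * kd + (1 - alpha) * ce
```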
## Output Layer Pruning
Llama Guard's output layer spans a 128k-token vocabulary, of which only 20 tokens (e.g. safe, unsafe, S, 1, ...) are ever generated. By keeping the output-layer connections corresponding to those 20 tokens and pruning the remaining connections, we can reduce the output layer size significantly without affecting the model outputs. Using output layer pruning, we reduced the output layer from 262.6M parameters (2048x128k) to 40.96k parameters (2048x20), a total savings of 131.3MB with 4-bit quantized weights. Although the pruned output layer only produces 20 logits, they are expanded back to the original 128k-token output space inside the model.
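The mechanics can be sketched as follows; `kept_token_ids` stands in for the 20 usable token ids, and how the expansion is wired into the model is our assumption.

```python
import torch

def prune_output_layer(lm_head: torch.nn.Linear, kept_token_ids):
    # Keep only the rows of the (vocab_size x hidden) projection that
    # correspond to tokens the classifier can emit: 2048x128k -> 2048x20.
    pruned = torch.nn.Linear(lm_head.in_features, len(kept_token_ids), bias=False)
    pruned.weight.data = lm_head.weight.data[kept_token_ids].clone()
    return pruned

def expand_logits(pruned_logits, kept_token_ids, vocab_size):
    # Scatter the 20 logits back into a full-vocabulary tensor so that
    # downstream decoding still sees the original 128k output space.
    full = pruned_logits.new_full((*pruned_logits.shape[:-1], vocab_size), float("-inf"))
    full[..., kept_token_ids] = pruned_logits
    return full
```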
## Evaluation
Note on evaluations: As discussed in the original Llama Guard [paper](https://arxiv.org/abs/2312.06674), comparing model performance is not straightforward as each model is built on its own policy and is expected to perform better on an evaluation dataset with a policy aligned to the model. This highlights the need for industry standards. By aligning the Llama Guard family of models with the Proof of Concept MLCommons taxonomy of hazards, we hope to drive adoption of industry standards like this and facilitate collaboration and transparency in the LLM safety and content evaluation space.
We evaluate the performance of the Llama Guard 1B models on the MLCommons hazard taxonomy and compare it across languages with Llama Guard 3-8B on our internal test set. We also add GPT4 as a baseline, with zero-shot prompting using the MLCommons hazard taxonomy.
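For reference, the two reported numbers can be computed from binary safe/unsafe predictions as sketched below; treating "unsafe" as the positive class is our assumption.

```python
from sklearn.metrics import confusion_matrix, f1_score

def f1_fpr(y_true, y_pred):
    # y_true / y_pred: 1 = unsafe (positive class), 0 = safe.
    f1 = f1_score(y_true, y_pred)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    fpr = fp / (fp + tn)  # benign conversations wrongly flagged as unsafe
    return f1, fpr
```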
<table align="center">
<tbody>
<tr>
<td rowspan="2"><b>Model</b></td>
<td colspan="11"><center><b>F1/FPR</b></center></td>
</tr>
<tr>
<td><b>English</b></td>
<td><b>French</b></td>
<td><b>German</b></td>
<td><b>Italian</b></td>
<td><b>Spanish</b></td>
<td><b>Portuguese</b></td>
<td><b>Hindi</b></td>
<td><b>Vietnamese</b></td>
<td><b>Indonesian</b></td>
<td><b>Thai</b></td>
<td><b>XSTest</b></td>
</tr>
<tr>
<td>Llama Guard 3-8B</td>
<td>0.939/0.040</td>
<td>0.943/0.036</td>
<td>0.877/0.032</td>
<td>0.873/0.038</td>
<td>0.875/0.023</td>
<td>0.860/0.060</td>
<td>0.871/0.050</td>
<td>0.890/0.034</td>
<td>0.915/0.048</td>
<td>0.834/0.030</td>
<td>0.884/0.044</td>
</tr>
<tr>
<td>Llama Guard 3-1B</td>
<td>0.899/0.090</td>
<td>0.939/0.012</td>
<td>0.845/0.036</td>
<td>0.897/0.111</td>
<td>0.837/0.083</td>
<td>0.763/0.114</td>
<td>0.680/0.057</td>
<td>0.723/0.130</td>
<td>0.875/0.083</td>
<td>0.749/0.078</td>
<td>0.821/0.068</td>
</tr>
<tr>
<td>Llama Guard 3-1B -INT4</td>
<td>0.904/0.084</td>
<td>0.873/0.072</td>
<td>0.835/0.145</td>
<td>0.897/0.111</td>
<td>0.852/0.104</td>
<td>0.830/0.109</td>
<td>0.564/0.114</td>
<td>0.792/0.171</td>
<td>0.833/0.121</td>
<td>0.831/0.114</td>
<td>0.737/0.152</td>
</tr>
<tr>
<td>GPT4</td>
<td>0.805/0.152</td>
<td>0.795/0.157</td>
<td>0.691/0.123</td>
<td>0.753/0.20</td>
<td>0.711/0.169</td>
<td>0.738/0.207</td>
<td>0.709/0.206</td>
<td>0.741/0.148</td>
<td>0.787/0.169</td>
<td>0.688/0.168</td>
<td>0.895/0.128</td>
</tr>
</tbody>
</table>
## Limitations
There are some limitations associated with Llama Guard 3-1B. First, Llama Guard 3-1B itself is an LLM fine-tuned on Llama 3.2. Thus, its performance (e.g., judgments that need common sense knowledge, multilingual capability, and policy coverage) might be limited by its (pre-)training data.
Llama Guard performance varies across model size and languages. When possible, developers should consider Llama Guard 3-8B, which may provide better safety classification performance but comes at a higher deployment cost. Please refer to the evaluation section and test the safeguards before deployment to ensure they meet the safety requirements of your application.
Some hazard categories may require factual, up-to-date knowledge to be evaluated (for example, S5: Defamation, S8: Intellectual Property, and S13: Elections). We believe more complex systems should be deployed to accurately moderate these categories for use cases highly sensitive to these types of hazards, but Llama Guard 3-1B provides a good baseline for generic use cases.
Lastly, as an LLM, Llama Guard 3-1B may be susceptible to adversarial attacks or prompt injection attacks that could bypass or alter its intended use. Please [report](https://github.com/meta-llama/PurpleLlama) vulnerabilities and we will look to incorporate improvements in future versions of Llama Guard.
## References
[1] [Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations](https://arxiv.org/abs/2312.06674)
[2] [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862)
[3] [Llama Guard 3-8B Model Card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/8B/MODEL_CARD.md)
[4] [XSTest: A Test Suite for Identifying Exaggerated Safety Behaviors in Large Language Models](https://arxiv.org/abs/2308.01263)
[5] [Compact Language Models via Pruning and Knowledge Distillation](https://arxiv.org/html/2407.14679v1)
[6] [ShortGPT: Layers in Large Language Models are More Redundant Than You Expect](https://arxiv.org/abs/2403.03853)
## Citation
```
@misc{metallamaguard3,
author = {Llama Team, AI @ Meta},
title = {The Llama 3 Family of Models},
howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/1B/MODEL_CARD.md}},
year = {2024}
}
```
|
1g0rrr/insert_lamp | 1g0rrr | 2024-10-11T07:25:22Z | 8 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-10-11T07:25:12Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF | mradermacher | 2024-10-11T07:19:08Z | 32 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-QW2.5-1.5B-SR",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-1.5B-SR",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T06:07:00Z | ---
base_model: TechxGenus/CursorCore-QW2.5-1.5B-SR
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TechxGenus/CursorCore-QW2.5-1.5B-SR
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
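As a minimal sketch, assuming `huggingface_hub` and `llama-cpp-python` are installed, one of the quants from the table below can also be fetched and run directly from Python:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant file from this repo (filename taken from the table below).
path = hf_hub_download(
    repo_id="mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF",
    filename="CursorCore-QW2.5-1.5B-SR.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Write a hello world program in C.", max_tokens=128)["choices"][0]["text"])
```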
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF | mradermacher | 2024-10-11T07:19:05Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-QW2.5-1.5B-SR",
"base_model:quantized:TechxGenus/CursorCore-QW2.5-1.5B-SR",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-11T06:20:12Z | ---
base_model: TechxGenus/CursorCore-QW2.5-1.5B-SR
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TechxGenus/CursorCore-QW2.5-1.5B-SR
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-QW2.5-1.5B-SR-i1-GGUF/resolve/main/CursorCore-QW2.5-1.5B-SR.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied-Q4_K_M-GGUF | lemon-mint | 2024-10-11T07:17:08Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"llama-3.1",
"ncsoft",
"varco",
"llama-cpp",
"gguf-my-repo",
"en",
"ko",
"base_model:lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied",
"base_model:quantized:lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T07:16:44Z | ---
base_model: lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied
language:
- en
- ko
library_name: transformers
license: llama3.1
tags:
- llama-3.1
- ncsoft
- varco
- llama-cpp
- gguf-my-repo
---
# lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied-Q4_K_M-GGUF
This model was converted to GGUF format from [`lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied`](https://huggingface.co/lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied-Q4_K_M-GGUF --hf-file llama-varco-8b-instruct-llamafied-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied-Q4_K_M-GGUF --hf-file llama-varco-8b-instruct-llamafied-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied-Q4_K_M-GGUF --hf-file llama-varco-8b-instruct-llamafied-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied-Q4_K_M-GGUF --hf-file llama-varco-8b-instruct-llamafied-q4_k_m.gguf -c 2048
```
|
lemon-mint/Llama-VARCO-8B-Instruct-LLaMAfied | lemon-mint | 2024-10-11T07:13:01Z | 8 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3.1",
"ncsoft",
"varco",
"conversational",
"en",
"ko",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T07:10:31Z | ---
language:
- en
- ko
license: llama3.1
tags:
- llama-3.1
- ncsoft
- varco
base_model:
- meta-llama/Meta-Llama-3.1-8B
library_name: transformers
---
## Llama-VARCO-8B-Instruct
### About the Model
**Llama-VARCO-8B-Instruct** is a *generative model* built with Llama, specifically designed to excel in Korean through additional training. The model uses continual pre-training with both Korean and English datasets to enhance its understanding and generation capabilities in Korean, while also maintaining its proficiency in English. It then undergoes supervised fine-tuning (SFT) and direct preference optimization (DPO) in Korean to align it with human preferences.
- **Developed by:** NC Research, Language Model Team
- **Languages (NLP):** Korean, English
- **License:** LLAMA 3.1 COMMUNITY LICENSE AGREEMENT
- **Base model:** [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B)
## Uses
### Direct Use
We recommend using transformers v4.43.0 or later, as advised for Llama-3.1.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model = AutoModelForCausalLM.from_pretrained(
"NCSOFT/Llama-VARCO-8B-Instruct",
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NCSOFT/Llama-VARCO-8B-Instruct")
messages = [
{"role": "system", "content": "You are a helpful assistant Varco. Respond accurately and diligently according to the user's instructions."},
{"role": "user", "content": "안녕하세요."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)  # add_generation_prompt appends the assistant header before generation
eos_token_id = [
tokenizer.eos_token_id,
tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
inputs,
eos_token_id=eos_token_id,
max_length=8192
)
print(tokenizer.decode(outputs[0]))
```
## Evaluation
### LogicKor
We used the [LogicKor](https://github.com/instructkr/LogicKor) code to measure performance. For the judge model, we used the officially recommended gpt-4-1106-preview. The score includes only the 0-shot evaluation provided in the default setting.
| Model | Math | Reasoning | Writing | Coding | Understanding | Grammar | Single turn | Multi turn | Overall |
|--------------|--------|-------------|-----------|----------|-----------------|-----------|---------------|--------------|-----------|
| [Llama-VARCO-8B-Instruct](https://huggingface.co/NCSOFT/Llama-VARCO-8B-Instruct)| 6.71 / 8.57 | 8.86 / 8.29 | 9.86 / 9.71 | 8.86 / 9.29 | 9.29 / 10.0 | 8.57 / 7.86 | 8.69 | 8.95 | 8.82 |
| [EXAONE-3.0-7.8B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct)| 6.86 / 7.71 | 8.57 / 6.71 | 10.0 / 9.29 | 9.43 / 10.0 | 10.0 / 10.0 | 9.57 / 5.14 | 9.07 | 8.14 | 8.61 |
| [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)| 4.29 / 4.86 | 6.43 / 6.57 | 6.71 / 5.14 | 6.57 / 6.00 | 4.29 / 4.14 | 6.00 / 4.00 | 5.71 | 5.12 | 5.42 |
| [Gemma-2-9B-Instruct](https://huggingface.co/google/gemma-2-9b-it)| 6.14 / 5.86 | 9.29 / 9.0 | 9.29 / 8.57 | 9.29 / 9.14 | 8.43 / 8.43 | 7.86 / 4.43 | 8.38 | 7.57 | 7.98 |
| [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)| 5.57 / 4.86 | 7.71 / 6.43 | 7.43 / 7.00 | 7.43 / 8.00 | 7.86 / 8.71 | 6.29 / 3.29 | 7.05 | 6.38 | 6.71 | |
1g0rrr/grab_lamp | 1g0rrr | 2024-10-11T07:12:00Z | 6 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-10-11T07:11:51Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
mradermacher/CursorCore-Yi-1.5B-LC-GGUF | mradermacher | 2024-10-11T07:08:09Z | 22 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-Yi-1.5B-LC",
"base_model:quantized:TechxGenus/CursorCore-Yi-1.5B-LC",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T06:10:54Z | ---
base_model: TechxGenus/CursorCore-Yi-1.5B-LC
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TechxGenus/CursorCore-Yi-1.5B-LC
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.IQ4_XS.gguf) | IQ4_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.f16.gguf) | f16 | 3.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF | mradermacher | 2024-10-11T07:08:06Z | 128 | 0 | transformers | [
"transformers",
"gguf",
"code",
"en",
"base_model:TechxGenus/CursorCore-Yi-1.5B-LC",
"base_model:quantized:TechxGenus/CursorCore-Yi-1.5B-LC",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-11T07:04:11Z | ---
base_model: TechxGenus/CursorCore-Yi-1.5B-LC
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- code
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TechxGenus/CursorCore-Yi-1.5B-LC
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ1_S.gguf) | i1-IQ1_S | 0.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q2_K.gguf) | i1-Q2_K | 0.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.0 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.0 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.0 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/CursorCore-Yi-1.5B-LC-i1-GGUF/resolve/main/CursorCore-Yi-1.5B-LC.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
RUC-AIBOX/policy_1011 | RUC-AIBOX | 2024-10-11T07:07:11Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T06:11:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Saideepthi55/sentencetransformer_ftmodel_on_chemical_dataset | Saideepthi55 | 2024-10-11T07:03:16Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:117502",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:SwastikN/sxc_med_llm_chemical_gen",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:microsoft/mpnet-base",
"base_model:finetune:microsoft/mpnet-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-11T07:02:49Z | ---
base_model: microsoft/mpnet-base
datasets:
- SwastikN/sxc_med_llm_chemical_gen
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:117502
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Help me make the molecule CC(=O)OC[C@H](OC(C)=O)C(=O)N1CCCC[C@H]1C1CCN(C(=O)c2cc3ccccc3n2C)CC1
with the same hydrogen bond donors. The output molecule should be similar to the
input molecule. Please inform me of the number of hydrogen bond donor(s) of the
optimized molecule.
sentences:
- Your requirements guided the optimization, resulting in the molecule "CC(=O)OC(CCl)C(Cc1cccs1)[C@H](OC(C)=O)C(=O)N1CCCC[C@H]1C1CCN(C(=O)c2cc3ccccc3n2C)CC1"
with an approximate hydrogen bond donor(s) of 0.
- Given a molecule expressed in SMILES string, help me optimize it according to
my requirements.
- Help me adapt a molecular structure denoted in SMILES string based on my preferences.
- source_sentence: How can we modify the molecule CCC(CC)=C(CC)c1ccccc1OC(=O)OC(N=[N+]=[N-])c1ccccc1
to decrease its blood-brain barrier penetration (BBBP) value while keeping it
similar to the input molecule? Please inform me of the BBBP value of the optimized
molecule.
sentences:
- Describe a technology used for measuring people's emotional responses.
- I've successfully optimized the molecule according to your needs, resulting in
"CCOC(=O)c1ccccc1OC(=O)OC(N=[N+]=[N-])c1ccccc1" with an approximate BBBP value
of 0.71.
- Given a molecule expressed in SMILES string, help me optimize it according to
my requirements.
- source_sentence: How can we modify the molecule C/C(=C/C(=O)N1CC[C@H](CC(CCCCCC(CO)C(=O)O)NC(=O)OC(C)(C)C)[C@H]1c1cccnc1)C(=O)O
to increase its blood-brain barrier penetration (BBBP) value while keeping it
similar to the input molecule?
sentences:
- Given a molecule expressed in SMILES string, help me optimize it according to
my requirements.
- Aid me in refining a molecular structure written in SMILES notation based on my
criteria.
- Taking your requirements into account, I've optimized the molecule to "C/C(=C/C(=O)N1CC[C@H](CNC(=O)[C@H](CO)NC(=O)OC(C)(C)C)[C@H]1c1cccnc1)C(=O)O".
- source_sentence: Support me in transforming the molecule [SMILES] by incorporating
the same hydrogen bond acceptors and maintaining its resemblance to the original
molecule.
sentences:
- Taking your requirements into account, I've optimized the molecule to "CCOc1cccc(C2c3c(oc4ccc(C)cc4c3=O)C(=O)N2CCN(CC)CC)c1".
- Help me adapt a molecular structure denoted in SMILES string based on my preferences.
- Help me adapt a molecular structure denoted in SMILES string based on my preferences.
- source_sentence: With a molecule represented by the SMILES string CNNNCC(=O)N[C@H](C)C[C@@H](C)NCc1ccc2c(c1)CCC2,
propose adjustments that can increase its logP value while keeping the output
molecule structurally related to the input molecule.
sentences:
- Aid me in refining a molecular structure written in SMILES notation based on my
criteria.
- Given a molecule expressed in SMILES string, help me optimize it according to
my requirements.
- In line with your criteria, I've optimized the molecule and present it as "C[C@H](C[C@@H](C)NC(=O)COC(C)(C)C)NCc1ccc2c(c1)CCC2".
model-index:
- name: MPNet base trained on sxc_med_llm_chemical_gen
results:
- task:
type: triplet
name: Triplet
dataset:
name: sxc med llm chemical gen
type: sxc_med_llm_chemical_gen
metrics:
- type: cosine_accuracy
value: 0.9837777777777778
name: Cosine Accuracy
- type: dot_accuracy
value: 0.01622222222222222
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.9826666666666667
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.9835555555555555
name: Euclidean Accuracy
- type: max_accuracy
value: 0.9837777777777778
name: Max Accuracy
---
# MPNet base trained on sxc_med_llm_chemical_gen
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [sxc_med_llm_chemical_gen](https://huggingface.co/datasets/SwastikN/sxc_med_llm_chemical_gen) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [sxc_med_llm_chemical_gen](https://huggingface.co/datasets/SwastikN/sxc_med_llm_chemical_gen)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Saideepthi55/sentencetransformer_ftmodel_on_chemical_dataset")
# Run inference
sentences = [
'With a molecule represented by the SMILES string CNNNCC(=O)N[C@H](C)C[C@@H](C)NCc1ccc2c(c1)CCC2, propose adjustments that can increase its logP value while keeping the output molecule structurally related to the input molecule.',
'Given a molecule expressed in SMILES string, help me optimize it according to my requirements.',
'In line with your criteria, I\'ve optimized the molecule and present it as "C[C@H](C[C@@H](C)NC(=O)COC(C)(C)C)NCc1ccc2c(c1)CCC2".',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `sxc_med_llm_chemical_gen`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.9838 |
| dot_accuracy | 0.0162 |
| manhattan_accuracy | 0.9827 |
| euclidean_accuracy | 0.9836 |
| **max_accuracy** | **0.9838** |
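As a hedged sketch of how this evaluation can be reproduced, the snippet below runs a `TripletEvaluator`; the three lists are illustrative placeholders, not the actual evaluation split.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("Saideepthi55/sentencetransformer_ftmodel_on_chemical_dataset")
evaluator = TripletEvaluator(
    anchors=["With a molecule represented by the SMILES string ..."],
    positives=["Given a molecule expressed in SMILES string, help me optimize it."],
    negatives=["Describe the benefits of having a mentor."],
    name="sxc_med_llm_chemical_gen",
)
# Accuracy = fraction of triplets where sim(anchor, positive) > sim(anchor, negative).
print(evaluator(model))
```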
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### sxc_med_llm_chemical_gen
* Dataset: [sxc_med_llm_chemical_gen](https://huggingface.co/datasets/SwastikN/sxc_med_llm_chemical_gen) at [ae28874](https://huggingface.co/datasets/SwastikN/sxc_med_llm_chemical_gen/tree/ae28874de7084969f9aa310b357ac699dbbafa23)
* Size: 117,502 training samples
* Columns: <code>input</code>, <code>instruction</code>, and <code>output</code>
* Approximate statistics based on the first 1000 samples:
| | input | instruction | output |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 50.03 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.19 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 67.94 tokens</li><li>max: 321 tokens</li></ul> |
* Samples:
| input | instruction | output |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code></code> | <code>List the countries of the Middle East</code> | <code>Egypt, Israel, Iran, Iraq, Jordan, Kuwait, Lebanon, Qatar, Saudi Arabia, Syria, Turkey, The United Arab Emirates, Oman, and Yemen.</code> |
| <code>Support me in refining the molecule CCC(C)=C(CC)c1ccc(Nc2nc3nonc3nc2N2CCC(Oc3ccc(Cl)cc3)CC2)cc1 to have a more drug-like nature, ensuring it remains close to the input molecule. Please inform me of the QED value of the optimized molecule.</code> | <code>Help me adapt a molecular structure denoted in SMILES string based on my preferences.</code> | <code>I've optimized the molecule according to your requirements, yielding the structure "COC(=O)c1ccc(Nc2nc3nonc3nc2N2CCC(Oc3ccc(Cl)cc3)CC2)cc1" and an estimated QED value of 0.4.</code> |
| <code></code> | <code>Create a social media strategy to attract followers on Twitter.</code> | <code>The social media strategy for Twitter should involve Tweeting on a regular basis, with content that is engaging and relevant to the target audience. This can include retweetable quotes, links to videos and articles, sharing photos/images, polls, and so on. It is also beneficial for the information to include hashtags relevant to the content and any relevant topics of the industry. <br><br>It is also important to leverage Twitter influencers and accounts, as well as engaging with potential followers regularly and DM them when necessary. Additionally, running campaigns, contests and giveaways to gain more followers should also be part of the strategy. Finally, hashtag research should be conducted to increase the reach of the tweets.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
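A minimal training sketch is given below, under the assumption that the dataset columns map to (anchor, positive, ...) in column order as `MultipleNegativesRankingLoss` expects; the exact preprocessing used for this model may differ.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("microsoft/mpnet-base")
train_dataset = load_dataset("SwastikN/sxc_med_llm_chemical_gen", split="train")

# In-batch negatives: every other example's positive serves as a negative.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```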
### Evaluation Dataset
#### sxc_med_llm_chemical_gen
* Dataset: [sxc_med_llm_chemical_gen](https://huggingface.co/datasets/SwastikN/sxc_med_llm_chemical_gen) at [ae28874](https://huggingface.co/datasets/SwastikN/sxc_med_llm_chemical_gen/tree/ae28874de7084969f9aa310b357ac699dbbafa23)
* Size: 4,500 evaluation samples
* Columns: <code>input</code>, <code>instruction</code>, and <code>output</code>
* Approximate statistics based on the first 1000 samples:
| | input | instruction | output |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 68.49 tokens</li><li>max: 165 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.89 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 69.16 tokens</li><li>max: 205 tokens</li></ul> |
* Samples:
| input | instruction | output |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>With a molecule represented by the SMILES string COc1ccc(CN(C(=O)Nc2ccc([C@H](C#N)c3ccccc3)c(Cl)c2)C2CC2)cc1, propose adjustments that can decrease its blood-brain barrier permeability (BBBP) value by at least 0.1 compared to the pre-optimized value while keeping the output molecule structurally related to the input molecule. Please inform me of the BBBP value of the optimized molecule.</code> | <code>Help me adapt a molecular structure denoted in SMILES string based on my preferences.</code> | <code>I've optimized the molecule according to your requirements, yielding the structure "CC(C)c1cccc(Oc2ccc(CN(C(=O)Nc3ccc([C@H](C#N)c4ccccc4)c(Cl)c3)C3CC3)cc2)c1" and an estimated BBBP value of 0.34.</code> |
| <code>Which modifications can be applied to the molecule Cc1nn(-c2cc3cc(F)ccc3s2)c2ncc(C(=O)N3CCCC3c3nnc4n3CCCCC4)cc12 to increase its water solubility value while ensuring it remains similar to the original molecule?</code> | <code>Aid me in refining a molecular structure written in SMILES notation based on my criteria.</code> | <code>I've optimized the molecule according to your requirements, resulting in the following structure: "Cc1nn(C)c2ncc(C(=O)N3CCCC3c3nnc4n3CCCCC4)cc12".</code> |
| <code></code> | <code>Describe the benefits of having a mentor.</code> | <code>Having a mentor can provide tremendous benefits, including guidance, advice, support and connection to a network of contacts. Mentors can offer valuable feedback, help to create goals and strategies, and provide the accountability necessary to carry out the desired goals. They can also provide a fresh perspective which can help to create new ideas and solutions.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | sxc_med_llm_chemical_gen_max_accuracy |
|:------:|:----:|:-------------:|:---------------:|:-------------------------------------:|
| 0 | 0 | - | - | 0.7076 |
| 0.0136 | 100 | 4.1705 | 3.6314 | 0.7469 |
| 0.0272 | 200 | 3.0088 | 2.4771 | 0.8191 |
| 0.0408 | 300 | 2.3803 | 2.3765 | 0.8716 |
| 0.0545 | 400 | 2.2281 | 2.3122 | 0.9316 |
| 0.0681 | 500 | 2.1647 | 2.2997 | 0.9436 |
| 0.0817 | 600 | 2.1289 | 2.2663 | 0.9511 |
| 0.0953 | 700 | 2.0663 | 2.2601 | 0.9629 |
| 0.1089 | 800 | 2.065 | 2.2500 | 0.9687 |
| 0.1225 | 900 | 2.0399 | 2.2595 | 0.9693 |
| 0.1362 | 1000 | 1.9939 | 2.2375 | 0.9707 |
| 0.1498 | 1100 | 1.9858 | 2.2220 | 0.9684 |
| 0.1634 | 1200 | 2.0069 | 2.2265 | 0.9758 |
| 0.1770 | 1300 | 1.9591 | 2.2073 | 0.9702 |
| 0.1906 | 1400 | 1.9288 | 2.2078 | 0.976 |
| 0.2042 | 1500 | 1.9466 | 2.2036 | 0.9758 |
| 0.2179 | 1600 | 1.9175 | 2.2086 | 0.9764 |
| 0.2315 | 1700 | 1.8835 | 2.2329 | 0.9796 |
| 0.2451 | 1800 | 1.9134 | 2.2003 | 0.9796 |
| 0.2587 | 1900 | 1.8809 | 2.2003 | 0.9811 |
| 0.2723 | 2000 | 1.9263 | 2.2039 | 0.9824 |
| 0.2859 | 2100 | 1.9101 | 2.2084 | 0.9804 |
| 0.2996 | 2200 | 1.8846 | 2.2052 | 0.9831 |
| 0.3132 | 2300 | 1.8842 | 2.1903 | 0.9818 |
| 0.3268 | 2400 | 1.8945 | 2.1984 | 0.9807 |
| 0.3404 | 2500 | 1.9217 | 2.1859 | 0.9829 |
| 0.3540 | 2600 | 1.8704 | 2.1995 | 0.982 |
| 0.3676 | 2700 | 1.889 | 2.2038 | 0.9822 |
| 0.3813 | 2800 | 1.875 | 2.2079 | 0.9829 |
| 0.3949 | 2900 | 1.8792 | 2.1975 | 0.9833 |
| 0.4085 | 3000 | 1.882 | 2.1895 | 0.9796 |
| 0.4221 | 3100 | 1.8886 | 2.2115 | 0.9831 |
| 0.4357 | 3200 | 1.8629 | 2.2040 | 0.9838 |
| 0.4493 | 3300 | 1.8647 | 2.1973 | 0.9836 |
| 0.4630 | 3400 | 1.8888 | 2.1961 | 0.9838 |
| 0.4766 | 3500 | 1.8692 | 2.2027 | 0.9829 |
| 0.4902 | 3600 | 1.8846 | 2.1954 | 0.9838 |
| 0.5038 | 3700 | 1.8482 | 2.1888 | 0.9822 |
| 0.5174 | 3800 | 1.8527 | 2.1873 | 0.9824 |
| 0.5310 | 3900 | 1.8378 | 2.1940 | 0.9811 |
| 0.5447 | 4000 | 1.8679 | 2.2008 | 0.9833 |
| 0.5583 | 4100 | 1.8421 | 2.1845 | 0.9842 |
| 0.5719 | 4200 | 1.8325 | 2.1948 | 0.9847 |
| 0.5855 | 4300 | 1.8675 | 2.1750 | 0.9836 |
| 0.5991 | 4400 | 1.8483 | 2.1828 | 0.9831 |
| 0.6127 | 4500 | 1.854 | 2.1886 | 0.9831 |
| 0.6264 | 4600 | 1.827 | 2.1876 | 0.9824 |
| 0.6400 | 4700 | 1.8863 | 2.1849 | 0.9836 |
| 0.6536 | 4800 | 1.8919 | 2.1816 | 0.984 |
| 0.6672 | 4900 | 1.8211 | 2.1830 | 0.9847 |
| 0.6808 | 5000 | 1.8345 | 2.1847 | 0.9842 |
| 0.6944 | 5100 | 1.8685 | 2.1855 | 0.9853 |
| 0.7081 | 5200 | 1.85 | 2.1864 | 0.9844 |
| 0.7217 | 5300 | 1.8222 | 2.1875 | 0.9842 |
| 0.7353 | 5400 | 1.8179 | 2.1923 | 0.9844 |
| 0.7489 | 5500 | 1.7992 | 2.1909 | 0.9851 |
| 0.7625 | 5600 | 1.8495 | 2.1811 | 0.9847 |
| 0.7761 | 5700 | 1.808 | 2.1763 | 0.9842 |
| 0.7898 | 5800 | 1.8293 | 2.1861 | 0.9849 |
| 0.8034 | 5900 | 1.8184 | 2.1845 | 0.9851 |
| 0.8170 | 6000 | 1.8256 | 2.1956 | 0.9849 |
| 0.8306 | 6100 | 1.7904 | 2.1916 | 0.9842 |
| 0.8442 | 6200 | 1.8028 | 2.1918 | 0.9847 |
| 0.8578 | 6300 | 1.8316 | 2.1917 | 0.9856 |
| 0.8715 | 6400 | 1.7951 | 2.1929 | 0.9851 |
| 0.8851 | 6500 | 1.8175 | 2.1866 | 0.9847 |
| 0.8987 | 6600 | 1.8071 | 2.1899 | 0.9853 |
| 0.9123 | 6700 | 1.8632 | 2.1905 | 0.9844 |
| 0.9259 | 6800 | 1.8441 | 2.1885 | 0.984 |
| 0.9395 | 6900 | 1.8243 | 2.1865 | 0.9836 |
| 0.9532 | 7000 | 1.8055 | 2.1852 | 0.9842 |
| 0.9668 | 7100 | 1.8227 | 2.1843 | 0.984 |
| 0.9804 | 7200 | 1.8287 | 2.1831 | 0.984 |
| 0.9940 | 7300 | 1.8379 | 2.1838 | 0.9838 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
iokimao/livetech02 | iokimao | 2024-10-11T07:00:06Z | 5 | 0 | null | [
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-09-27T13:11:54Z | ---
license: apache-2.0
---
|
KSU-HW-SEC/Power | KSU-HW-SEC | 2024-10-11T06:53:28Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T06:50:14Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/functionary-small-v2.2-GGUF | QuantFactory | 2024-10-11T06:53:20Z | 15 | 2 | transformers | [
"transformers",
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-11T06:17:18Z |
---
pipeline_tag: text-generation
base_model: functionary-small-v2.2
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/functionary-small-v2.2-GGUF
This is quantized version of [meetkai/functionary-small-v2.2](https://huggingface.co/meetkai/functionary-small-v2.2) created using llama.cpp
# Original Model Card
# Model Card for functionary-small-v2.2
[https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary)

Functionary is a language model that can interpret and execute functions/plugins.
The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls.
## Key Features
- Intelligent **parallel tool use**
- Able to analyze functions/tools outputs and provide relevant responses **grounded in the outputs**
- Able to decide **when to not use tools/call functions** and provide normal chat response
- Truly one of the best open-source alternatives to GPT-4
## Performance
Our model achieves state-of-the-art performance in Function Calling Accuracy on our in-house dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction.

| Dataset | Model Name | Function Calling Accuracy (Name & Arguments) |
| :-------------| :-------------------| ---------------------------: |
| In-house data | MeetKai-functionary-small-v2.2 | 0.546|
| In-house data | MeetKai-functionary-medium-v2.2 | **0.664**|
| In-house data | OpenAI-gpt-3.5-turbo-1106 | 0.531 |
| In-house data | OpenAI-gpt-4-1106-preview | 0.737 |
## Prompt Template
We use a specially designed prompt template, which we call "v2PromptTemplate", that breaks each turn down into from, recipient, and content portions.
We convert function definitions to text similar to TypeScript definitions, then inject these definitions as system prompts. After that, we inject the default system prompt, followed by the conversation messages.
This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server:
```python
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")
client.chat.completions.create(
model="path/to/functionary/model/",
messages=[{"role": "user",
"content": "What is the weather for Istanbul?"}
],
tools=[{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": ["location"]
}
}
}],
tool_choice="auto"
)
```
will yield:
```
<|from|>system
<|recipient|>all
<|content|>// Supported function definitions that should be called when necessary.
namespace functions {
// Get the current weather
type get_current_weather = (_: {
// The city and state, e.g. San Francisco, CA
location: string,
}) => any;
} // namespace functions
<|from|>system
<|recipient|>all
<|content|>A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant calls functions with appropriate input when necessary
<|from|>user
<|recipient|>all
<|content|>What is the weather for Istanbul?
```
A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v2.txt).
## Run the model
We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary).
# The MeetKai Team

|
KSU-HW-SEC/Power_SFT | KSU-HW-SEC | 2024-10-11T06:49:24Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T06:46:05Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
borno1/WSDD_x | borno1 | 2024-10-11T06:42:18Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-11T06:41:10Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### tecaher-sdv1-5-epochs-8000 on Stable Diffusion via Dreambooth
#### model by borno1
This is the Stable Diffusion model fine-tuned on the tecaher-sdv1-5-epochs-8000 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **flower**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
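For a quick local test, a minimal `diffusers` sketch (assuming a CUDA GPU; the prompt uses the `flower` instance token from above):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned Dreambooth checkpoint from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "borno1/WSDD_x", torch_dtype=torch.float16
).to("cuda")

# The instance prompt token "flower" triggers the learned concept
image = pipe("a photo of flower").images[0]
image.save("flower.png")
```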
Here are the images used for training this concept:
|
CheeLi03/whisper-ar | CheeLi03 | 2024-10-11T06:42:15Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:fleurs",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-10-11T03:57:57Z | ---
base_model: openai/whisper-small
datasets:
- fleurs
language:
- ar
license: apache-2.0
metrics:
- wer
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: Whisper Small - Chee Li
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Google Fleurs
type: fleurs
config: ar_eg
split: None
args: 'config: ar split: test'
metrics:
- type: wer
value: 35.462500000000006
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small - Chee Li
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Google Fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4791
- Wer: 35.4625
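As a quick sanity check, a minimal inference sketch using the standard `transformers` ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Arabic checkpoint from this repository
asr = pipeline("automatic-speech-recognition", model="CheeLi03/whisper-ar")

# Transcribe a local audio file (placeholder path)
print(asr("sample.wav")["text"])
```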
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0147 | 6.6667 | 1000 | 0.3868 | 34.5125 |
| 0.0009 | 13.3333 | 2000 | 0.4417 | 36.6375 |
| 0.0004 | 20.0 | 3000 | 0.4693 | 35.5625 |
| 0.0003 | 26.6667 | 4000 | 0.4791 | 35.4625 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
KSU-HW-SEC/Area_Time_SFT | KSU-HW-SEC | 2024-10-11T06:41:04Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T06:37:16Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KSU-HW-SEC/Area | KSU-HW-SEC | 2024-10-11T06:36:24Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T06:33:05Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bky373/spring-assistant | bky373 | 2024-10-11T06:34:54Z | 5 | 0 | null | [
"safetensors",
"gemma",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-10-03T21:38:29Z | ---
license: apache-2.0
---
## Spring Assistant
- https://github.com/bky373/spring-assistant?tab=readme-ov-file#spring-assistant |
mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF | mradermacher | 2024-10-11T06:32:06Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:sethuiyer/Llama3.2-3B-Anvita-Instruct",
"base_model:quantized:sethuiyer/Llama3.2-3B-Anvita-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T06:24:33Z | ---
base_model: sethuiyer/Llama3.2-3B-Anvita-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sethuiyer/Llama3.2-3B-Anvita-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
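For a quick local test, a minimal sketch using the `llama-cpp-python` bindings (assuming the package is installed and one of the quant files listed below has been downloaded):

```python
from llama_cpp import Llama

# Load a downloaded quant file (placeholder local path)
llm = Llama(model_path="Llama3.2-3B-Anvita-Instruct.Q4_K_M.gguf")

out = llm("Write a haiku about mountains.", max_tokens=64)
print(out["choices"][0]["text"])
```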
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q5_K_S.gguf) | Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q5_K_M.gguf) | Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.2-3B-Anvita-Instruct-GGUF/resolve/main/Llama3.2-3B-Anvita-Instruct.f16.gguf) | f16 | 7.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
pranavajay/animated | pranavajay | 2024-10-11T06:29:05Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"fluxpipeline",
"flux",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-11T06:26:23Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- fluxpipeline
- flux
base_model: black-forest-labs/FLUX.1-dev
license: creativeml-openrail-m
pipeline_tag: text-to-image
--- |
Rasooli/Bert-Sentiment-Fa | Rasooli | 2024-10-11T06:29:01Z | 21 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:dadashzadeh/roberta-sentiment-persian",
"base_model:finetune:dadashzadeh/roberta-sentiment-persian",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-09-05T04:06:26Z | ---
library_name: transformers
license: apache-2.0
base_model: dadashzadeh/roberta-sentiment-persian
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Bert-Sentiment-Fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-Sentiment-Fa
This model is a fine-tuned version of [dadashzadeh/roberta-sentiment-persian](https://huggingface.co/dadashzadeh/roberta-sentiment-persian) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2509
- Accuracy: 0.8333
- F1: 0.8213
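A minimal inference sketch with the `transformers` text-classification pipeline (the example sentence is Persian for "This movie was great"):

```python
from transformers import pipeline

# Load the fine-tuned Persian sentiment classifier
clf = pipeline("text-classification", model="Rasooli/Bert-Sentiment-Fa")

# "این فیلم عالی بود" -- Persian for "This movie was great"
print(clf("این فیلم عالی بود"))
```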
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 270 | 0.4689 | 0.8208 | 0.8081 |
| 0.4419 | 2.0 | 540 | 0.7073 | 0.8042 | 0.7840 |
| 0.4419 | 3.0 | 810 | 0.7497 | 0.8042 | 0.7820 |
| 0.1823 | 4.0 | 1080 | 0.9258 | 0.8167 | 0.7981 |
| 0.1823 | 5.0 | 1350 | 1.0813 | 0.8042 | 0.7897 |
| 0.0742 | 6.0 | 1620 | 1.1488 | 0.8042 | 0.7866 |
| 0.0742 | 7.0 | 1890 | 1.2846 | 0.8167 | 0.8023 |
| 0.0311 | 8.0 | 2160 | 1.2308 | 0.8333 | 0.8193 |
| 0.0311 | 9.0 | 2430 | 1.2446 | 0.8333 | 0.8193 |
| 0.0108 | 10.0 | 2700 | 1.2509 | 0.8333 | 0.8213 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
ojigrande/ppo-Huggy | ojigrande | 2024-10-11T06:18:57Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-10-11T06:18:20Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ojigrande/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
leenag/whisper-small-poula | leenag | 2024-10-11T06:13:34Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"pmx",
"dataset:iitd-duk/paula",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-11T04:45:43Z | ---
library_name: transformers
language:
- pmx
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- iitd-duk/paula
metrics:
- wer
model-index:
- name: Whisper-Small-paula
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Paula
type: iitd-duk/paula
metrics:
- name: Wer
type: wer
value: 97.75910364145658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Small-paula
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Paula dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7488
- Wer: 97.7591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1 | 5.0 | 100 | 2.4980 | 131.0924 |
| 0.0249 | 10.0 | 200 | 2.5853 | 97.7591 |
| 0.0065 | 15.0 | 300 | 2.6842 | 98.0392 |
| 0.0026 | 20.0 | 400 | 2.7265 | 96.9188 |
| 0.0016 | 25.0 | 500 | 2.7488 | 97.7591 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
QuantFactory/OpenHathi-7B-Hi-v0.1-Base-GGUF | QuantFactory | 2024-10-11T05:59:21Z | 52 | 1 | null | [
"gguf",
"hi",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-10-11T05:25:02Z |
---
license: llama2
language:
- hi
---
[](https://hf.co/QuantFactory)
# QuantFactory/OpenHathi-7B-Hi-v0.1-Base-GGUF
This is quantized version of [sarvamai/OpenHathi-7B-Hi-v0.1-Base](https://huggingface.co/sarvamai/OpenHathi-7B-Hi-v0.1-Base) created using llama.cpp
# Original Model Card
This repository is the first model in the OpenHathi series of models that will be released by Sarvam AI. This is a 7B parameter model, based on Llama2, trained on Hindi, English, and Hinglish. More details about the model, its training procedure, and evaluations can be found [here](https://www.sarvam.ai/blog/announcing-openhathi-series).
Note: this is a base model and not meant to be used as is. We recommend first finetuning it on task(s) you are interested in.
```python
# Usage
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
tokenizer = LlamaTokenizer.from_pretrained('sarvamai/OpenHathi-7B-Hi-v0.1-Base')
model = LlamaForCausalLM.from_pretrained('sarvamai/OpenHathi-7B-Hi-v0.1-Base', torch_dtype=torch.bfloat16)
prompt = "मैं एक अच्छा हाथी हूँ"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
|
quancute/phobert-base-v2-ner-ghtk-ai-fluent-segmented-21-label-new-data-3090-11Obt-1 | quancute | 2024-10-11T05:52:36Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-11T05:46:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf | RichardErkhov | 2024-10-11T05:40:38Z | 7 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T03:15:50Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.1-Infinirc-8B-Instruct - GGUF
- Model creator: https://huggingface.co/Infinirc/
- Original model: https://huggingface.co/Infinirc/Llama-3.1-Infinirc-8B-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.1-Infinirc-8B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3.1-Infinirc-8B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3.1-Infinirc-8B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3.1-Infinirc-8B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3.1-Infinirc-8B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3.1-Infinirc-8B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3.1-Infinirc-8B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf/blob/main/Llama-3.1-Infinirc-8B-Instruct.Q8_0.gguf) | Q8_0 | 7.95GB |
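For local inference, any of the files above can be fetched with `huggingface_hub` and loaded with `llama-cpp-python`. A minimal sketch, not from the original card; the Q4_K_M file and the context size are arbitrary choices:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from this repo (Q4_K_M is an arbitrary pick).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Infinirc_-_Llama-3.1-Infinirc-8B-Instruct-gguf",
    filename="Llama-3.1-Infinirc-8B-Instruct.Q4_K_M.gguf",
)

# Load the model; n_ctx is an assumed context size, tune it to your hardware.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is the capital of Taiwan?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```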
Original model description:
---
license: llama3.1
language:
- zh
- en
library_name: transformers
tags:
- zhtw
---
# Infinirc/Llama-3.1-Infinirc-8B-Instruct
## Model Details
**Developer**: 陳昭儒, [Infinirc.com](https://infinirc.com)
**Model Version**: 1.0
**Model Type**: Natural language processing
**Training Data Sources**: Datasets related to Taiwanese culture, including Taiwanese news, literary works, and online articles.
## Purpose and Use
This Llama3 8B model was designed and fine-tuned to better understand and generate text related to Taiwanese culture. The goal is to provide a strong language model that captures Taiwan's distinctive cultural elements and linguistic conventions, suitable for applications such as text generation and automated question answering.
## Model Architecture
**Base Model**: Llama3 8B
**Tuning Strategy**: The model was fine-tuned on datasets specific to Taiwanese culture to strengthen its understanding and generation of localized content.
## Performance Metrics
The model demonstrates strong performance across a range of NLP benchmarks, with particularly high accuracy in text generation and semantic understanding.
**Detailed performance data**: (BLEU scores, ROUGE scores, and other metrics to be detailed)
## Usage and Limitations
**Recommended Use**: Best suited for applications that require deep understanding or generation of content related to Taiwanese culture.
**Limitations**: Although the model is optimized for Taiwanese culture, it may perform less well on cultural content from other regions.
## Risks and Ethical Considerations
When using this model, ensure that generated content does not contain discriminatory or harmful information. Development and use of the model should follow ethical guidelines and social responsibility.
## Contact
For any questions or further information, please contact our team via:
Email: [[email protected]](mailto:[email protected])
Website: [https://infinirc.com](https://infinirc.com)
|
general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B | general-preference | 2024-10-11T05:33:36Z | 6 | 1 | null | [
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:openbmb/UltraFeedback",
"arxiv:2410.02197",
"arxiv:2405.00675",
"license:apache-2.0",
"model-index",
"region:us"
] | text-generation | 2024-10-08T03:18:00Z | ---
language:
- en
license: apache-2.0
datasets:
- openbmb/UltraFeedback
pipeline_tag: text-generation
model-index:
- name: SPPO-Llama-3-8B-Instruct-GPM-2B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 60.24
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 27.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 8.01
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.23
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.19
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.53
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B
name: Open LLM Leaderboard
---
General Preference Modeling with Preference Representations for Aligning Language Models (https://arxiv.org/abs/2410.02197)
# SPPO-Llama-3-8B-Instruct-GPM-2B
This model was developed using [SPPO](https://arxiv.org/abs/2405.00675) at iteration 3 and the [General Preference representation Model (GPM)](https://arxiv.org/abs/2410.02197) (specifically, [GPM-Gemma-2B](https://huggingface.co/general-preference/GPM-Gemma-2B)), with [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the starting point. We utilized the prompt sets from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, split into 3 parts for 3 iterations following [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset). All responses used are synthetic.
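As an instruction-tuned Llama-3 derivative, the model can be served with standard `transformers` tooling. A minimal usage sketch (assumed, not taken from the authors; generation settings are illustrative):

```python
from transformers import pipeline

chatbot = pipeline(
    "text-generation",
    model="general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B",
    device_map="auto",
)
messages = [{"role": "user", "content": "Explain preference optimization in one sentence."}]
# The pipeline applies the Llama-3 chat template to the message list.
print(chatbot(messages, max_new_tokens=64)[0]["generated_text"])
```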
## Links to Other Models
- [SPPO-Llama-3-8B-Instruct-GPM-2B](https://huggingface.co/general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B)
- [GPO-Llama-3-8B-Instruct-GPM-2B](https://huggingface.co/general-preference/GPO-Llama-3-8B-Instruct-GPM-2B)
### Model Description
- Model type: An 8B-parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: Apache-2.0
- Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct
## [AlpacaEval Leaderboard Evaluation Results](https://tatsu-lab.github.io/alpaca_eval/)
| Model | LC. Win Rate | Win Rate | Avg. Length |
|-------------------------------------------|:------------:|:--------:|:-----------:|
|[SPPO-Llama-3-8B-Instruct-GPM-2B](https://huggingface.co/general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B) | 35.30 | 45.44 | 2490 |
## [Open LLM Leaderboard Evaluation Results](https://github.com/EleutherAI/lm-evaluation-harness)
Results are reported by using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) v0.4.1
| | arc_challenge | truthfulqa_mc2 | winogrande | gsm8k | hellaswag | mmlu | average |
|--------|---------------|----------------|------------|-------|-----------|-------|---------|
|[SPPO-Llama-3-8B-Instruct-GPM-2B](https://huggingface.co/general-preference/SPPO-Llama-3-8B-Instruct-GPM-2B) | 62.03 | 52.95 | 76.56 | 75.36 | 78.57 | 65.66 | 68.52 |
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- eta: 1000
- per_device_train_batch_size: 8
- gradient_accumulation_steps: 1
- seed: 42
- distributed_type: deepspeed_zero3
- num_devices: 8
- optimizer: RMSProp
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_train_epochs: 6.0 (training stopped at epoch 1.0)
## Citation
```
@article{zhang2024general,
title={General Preference Modeling with Preference Representations for Aligning Language Models},
author={Zhang, Yifan and Zhang, Ge and Wu, Yue and Xu, Kangping and Gu, Quanquan},
journal={arXiv preprint arXiv:2410.02197},
year={2024}
}
```
|
QuantFactory/Flammades-Mistral-7B-GGUF | QuantFactory | 2024-10-11T05:08:12Z | 38 | 1 | transformers | [
"transformers",
"gguf",
"dataset:flammenai/Date-DPO-NoAsterisks",
"dataset:jondurbin/truthy-dpo-v0.1",
"base_model:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"base_model:quantized:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T04:30:53Z |
---
library_name: transformers
base_model:
- nbeerbower/Hermes2-Gutenberg2-Mistral-7B
datasets:
- flammenai/Date-DPO-NoAsterisks
- jondurbin/truthy-dpo-v0.1
license: apache-2.0
---
[](https://hf.co/QuantFactory)
# QuantFactory/Flammades-Mistral-7B-GGUF
This is a quantized version of [flammenai/Flammades-Mistral-7B](https://huggingface.co/flammenai/Flammades-Mistral-7B) created using llama.cpp.
# Original Model Card
# Flammades-Mistral-7B
[nbeerbower/Hermes2-Gutenberg2-Mistral-7B](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B) finetuned on [flammenai/Date-DPO-NoAsterisks](https://huggingface.co/datasets/flammenai/Date-DPO-NoAsterisks) and [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1).
### Method
[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 2x RTX 3090 for 3 epochs.
|
zeon8985army/IndonesiaLukasLargeV3 | zeon8985army | 2024-10-11T05:05:10Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-11T04:47:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sandeshrajx/bert-fraud-classification-test-mass-4 | sandeshrajx | 2024-10-11T04:53:32Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:sandeshrajx/bert-fraud-classification-test-mass",
"base_model:finetune:sandeshrajx/bert-fraud-classification-test-mass",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-11T04:53:09Z | ---
library_name: transformers
license: apache-2.0
base_model: sandeshrajx/bert-fraud-classification-test-mass
tags:
- generated_from_trainer
metrics:
- f1
- precision
model-index:
- name: bert-fraud-classification-test-mass-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sandeshrajx/ultron-nlp/runs/xnpaqmt6)
# bert-fraud-classification-test-mass-4
This model is a fine-tuned version of [sandeshrajx/bert-fraud-classification-test-mass](https://huggingface.co/sandeshrajx/bert-fraud-classification-test-mass) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3408
- F1: 0.8508
- Precision: 0.8627
- Val Accuracy: 0.8663
## Model description
More information needed
## Intended uses & limitations
More information needed
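Pending fuller documentation, a minimal inference sketch (assumptions flagged in comments):

```python
from transformers import pipeline

# The card does not document the label mapping, so outputs may appear as
# generic ids (e.g. LABEL_0 / LABEL_1) that you must map to fraud / not-fraud.
clf = pipeline(
    "text-classification",
    model="sandeshrajx/bert-fraud-classification-test-mass-4",
)
print(clf("Congratulations! You won a prize, click this link to claim it."))
```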
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 44
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 88
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Val Accuracy |
|:-------------:|:------:|:----:|:---------------:|:------:|:---------:|:------------:|
| 0.3874 | 0.1743 | 40 | 0.3197 | 0.8579 | 0.8936 | 0.8758 |
| 0.3614 | 0.3486 | 80 | 0.3427 | 0.8382 | 0.8846 | 0.8603 |
| 0.3563 | 0.5229 | 120 | 0.3505 | 0.8435 | 0.8468 | 0.8584 |
| 0.4263 | 0.6972 | 160 | 0.3407 | 0.8454 | 0.8589 | 0.8617 |
| 0.3514 | 0.8715 | 200 | 0.3473 | 0.8413 | 0.8421 | 0.8560 |
| 0.259 | 1.0458 | 240 | 0.3378 | 0.8417 | 0.9106 | 0.8663 |
| 0.3148 | 1.2200 | 280 | 0.3543 | 0.8479 | 0.8889 | 0.8679 |
| 0.2685 | 1.3943 | 320 | 0.3507 | 0.8501 | 0.9040 | 0.8715 |
| 0.2271 | 1.5686 | 360 | 0.3773 | 0.8406 | 0.8262 | 0.8526 |
| 0.376 | 1.7429 | 400 | 0.3412 | 0.8520 | 0.8731 | 0.8687 |
| 0.2739 | 1.9172 | 440 | 0.3408 | 0.8508 | 0.8627 | 0.8663 |
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
leenag/whisper-small-paula | leenag | 2024-10-11T04:40:17Z | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"pmx",
"dataset:iitd-duk/paula",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-08-20T08:17:59Z | ---
library_name: transformers
language:
- pmx
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- iitd-duk/paula
model-index:
- name: Whisper-Small-paula
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Small-paula
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Paula dataset.
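A minimal transcription sketch (assumed usage, not provided by the author; the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="leenag/whisper-small-paula")
# "sample.wav" is a placeholder path to a local recording in the Paula language.
print(asr("sample.wav")["text"])
```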
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ariG23498/clip-vit-large-patch14-text-flax | ariG23498 | 2024-10-11T04:37:43Z | 1,248 | 2 | transformers | [
"transformers",
"safetensors",
"clip_text_model",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-11T03:49:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
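Until the authors fill this in, here is a minimal sketch inferred from the repo's `clip_text_model` architecture (the snippet and the expected output shape are assumptions, not documented by the author):

```python
from transformers import AutoModel, AutoTokenizer

repo = "ariG23498/clip-vit-large-patch14-text-flax"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer(["a photo of a cat"], padding=True, return_tensors="pt")
outputs = model(**inputs)
# For a ViT-L/14 text tower the pooled embedding is expected to be 768-dim.
print(outputs.pooler_output.shape)
```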
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
objects76/ft-Llama-3.2-3B-Instruct | objects76 | 2024-10-11T04:27:26Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T04:25:50Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** objects76
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
TechxGenus/Mistral-Large-Instruct-2407-AWQ | TechxGenus | 2024-10-11T04:17:54Z | 222 | 11 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-07-25T05:10:08Z | ---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
AWQ-quantized version of the Mistral-Large-Instruct-2407 model.
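A minimal loading sketch for this AWQ checkpoint with `transformers` (assumed usage: it requires the `autoawq` package and substantial multi-GPU memory, since 123B parameters remain large even at 4-bit):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TechxGenus/Mistral-Large-Instruct-2407-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```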
---
# Model Card for Mistral-Large-Instruct-2407
Mistral-Large-Instruct-2407 is an advanced dense Large Language Model (LLM) of 123B parameters with state-of-the-art reasoning, knowledge and coding capabilities.
For more details about this model please refer to our release [blog post](https://mistral.ai/news/mistral-large-2407/).
## Key features
- **Multi-lingual by design:** Dozens of languages supported, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch and Polish.
- **Proficient in coding:** Trained on 80+ coding languages such as Python, Java, C, C++, JavaScript, and Bash. Also trained on more specific languages such as Swift and Fortran.
- **Agentic-centric:** Best-in-class agentic capabilities with native function calling and JSON outputting.
- **Advanced Reasoning:** State-of-the-art mathematical and reasoning capabilities.
- **Mistral Research License:** Allows usage and modification for research and non-commercial usages.
- **Large Context:** A large 128k context window.
## Metrics
### Base Pretrained Benchmarks
| Benchmark | Score |
| --- | --- |
| MMLU | 84.0% |
### Base Pretrained Multilingual Benchmarks (MMLU)
| Benchmark | Score |
| --- | --- |
| French | 82.8% |
| German | 81.6% |
| Spanish | 82.7% |
| Italian | 82.7% |
| Dutch | 80.7% |
| Portuguese | 81.6% |
| Russian | 79.0% |
| Korean | 60.1% |
| Japanese | 78.8% |
| Chinese | 74.8% |
### Instruction Benchmarks
| Benchmark | Score |
| --- | --- |
| MT Bench | 8.63 |
| Wild Bench | 56.3 |
| Arena Hard| 73.2 |
### Code & Reasoning Benchmarks
| Benchmark | Score |
| --- | --- |
| Human Eval | 92% |
| Human Eval Plus| 87% |
| MBPP Base| 80% |
| MBPP Plus| 69% |
### Math Benchmarks
| Benchmark | Score |
| --- | --- |
| GSM8K | 93% |
| Math Instruct (0-shot, no CoT) | 70% |
| Math Instruct (0-shot, CoT)| 71.5% |
## Usage
The model can be used with two different frameworks
- [`mistral_inference`](https://github.com/mistralai/mistral-inference): See [here](https://huggingface.co/mistralai/Mistral-Large-2407#mistral-inference)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
### Mistral Inference
#### Install
It is recommended to use `mistralai/Mistral-Large-2407` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
#### Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Large')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Mistral-Large-2407", allow_patterns=["params.json", "consolidated-*.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
#### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.
Given the size of this model, you will need a node with several GPUs (more than 300 GB of combined VRAM).
If you have 8 GPUs on your machine, you can chat with the model using
```
torchrun --nproc-per-node 8 --no-python mistral-chat $HOME/mistral_models/Large --instruct --max_tokens 256 --temperature 0.7
```
*E.g.* Try out something like:
```
How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar.
```
#### Instruct following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
prompt = "How expensive would it be to ask a window cleaner to clean all windows in Paris. Make a reasonable guess in US Dollar."
completion_request = ChatCompletionRequest(messages=[UserMessage(content=prompt)])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.7, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
print(result)
```
#### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.7, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
print(result)
```
### Transformers
> [!IMPORTANT]
> NOTE: Until a new release has been made, you need to install transformers from source:
> ```sh
> pip install git+https://github.com/huggingface/transformers.git
> ```
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Large-2407")
chatbot(messages)
```
## Limitations
The Mistral Large model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Diogo Costa, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q5_K_S-GGUF | Triangle104 | 2024-10-11T03:59:56Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"base_model:quantized:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T03:57:01Z | ---
base_model: nbeerbower/Hermes2-Gutenberg2-Mistral-7B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Hermes2-Gutenberg2-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 37.21
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 28.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 5.66
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.26
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.92
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 22.14
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
---
# Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q5_K_S-GGUF
This model was converted to GGUF format from [`nbeerbower/Hermes2-Gutenberg2-Mistral-7B`](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B) for more details on the model.
---
Model details:
-
Hermes2-Gutenberg2-Mistral-7B
NousResearch/Hermes-2-Pro-Mistral-7B finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
Method: ORPO tuned with 2x RTX 3090 for 3 epochs.
Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|---------------------|-------|
| Avg. | 19.35 |
| IFEval (0-Shot) | 37.21 |
| BBH (3-Shot) | 28.91 |
| MATH Lvl 5 (4-Shot) | 5.66 |
| GPQA (0-shot) | 5.26 |
| MuSR (0-shot) | 16.92 |
| MMLU-PRO (5-shot) | 22.14 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q5_K_S-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q5_K_S-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q5_K_S-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q5_K_S-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q5_k_s.gguf -c 2048
```
|
Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_S-GGUF | Triangle104 | 2024-10-11T03:53:40Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"base_model:quantized:nbeerbower/Hermes2-Gutenberg2-Mistral-7B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T03:51:35Z | ---
base_model: nbeerbower/Hermes2-Gutenberg2-Mistral-7B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Hermes2-Gutenberg2-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 37.21
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 28.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 5.66
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.26
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 16.92
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 22.14
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Hermes2-Gutenberg2-Mistral-7B
name: Open LLM Leaderboard
---
# Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_S-GGUF
This model was converted to GGUF format from [`nbeerbower/Hermes2-Gutenberg2-Mistral-7B`](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B) for more details on the model.
---
Model details:
-
Hermes2-Gutenberg2-Mistral-7B
NousResearch/Hermes-2-Pro-Mistral-7B finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
Method: ORPO tuned with 2x RTX 3090 for 3 epochs.
Open LLM Leaderboard Evaluation Results (detailed results can be found here):

| Metric | Value |
|---------------------|-------|
| Avg. | 19.35 |
| IFEval (0-Shot) | 37.21 |
| BBH (3-Shot) | 28.91 |
| MATH Lvl 5 (4-Shot) | 5.66 |
| GPQA (0-shot) | 5.26 |
| MuSR (0-shot) | 16.92 |
| MMLU-PRO (5-shot) | 22.14 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_S-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_S-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_S-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Hermes2-Gutenberg2-Mistral-7B-Q4_K_S-GGUF --hf-file hermes2-gutenberg2-mistral-7b-q4_k_s.gguf -c 2048
```
|
tifin-india/fund-embedder | tifin-india | 2024-10-11T03:45:21Z | 165 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:44581",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:microsoft/mpnet-base",
"base_model:finetune:microsoft/mpnet-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-11T03:44:56Z | ---
base_model: microsoft/mpnet-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:44581
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Lupin Ltd.
sentences:
- Lupin Ltd India
- Asian Paints Corporation Ltd
- 82TCHF25 - Tata Capital Housing Finance
- source_sentence: NTPC Ltd. - (762NTPC35F)
sentences:
- Galada Power and Telecommunication Limited
- DSP BlackRock Savings Fund
- NTPC Limited - 762NTPC35F
- source_sentence: Neueon Towers Ltd.
sentences:
- Neueon Towers Company Limited
- CFI India
- Gita Renewable Energy India Private Limited
- source_sentence: Ganesh Benzoplast Ltd.
sentences:
- Cool Caps Industries Private Limited
- Sandhar Technologies Limited India
- Ganesh Benzoplast India Ltd
- source_sentence: ICICI Prudential Retirement Fund - Hybrid Conservative Plan
sentences:
- ICICI Prudential Retirement Hybrid Conservative Fund
- Lloyds Enterprises Limited
- KEC Ltd India
---
# SentenceTransformer based on microsoft/mpnet-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pawan2411/fund-embedder")
# Run inference
sentences = [
'ICICI Prudential Retirement Fund - Hybrid Conservative Plan',
'ICICI Prudential Retirement Hybrid Conservative Fund',
'KEC Ltd India',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 44,581 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 9.45 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.08 tokens</li><li>max: 27 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------|:----------------------------------------------|
| <code>Groww Value Fund</code> | <code>Groww Value Equity Fund</code> |
| <code>DSP Ultra Short Fund</code> | <code>DSP Ultra Short Term Income Fund</code> |
| <code>The Grob Tea Company Ltd.</code> | <code>Grob Tea Ltd</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) (a conceptual sketch follows after this list) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
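A conceptual sketch of this loss (my own illustration, not the library's internals): each anchor's matching positive sits on the diagonal of an in-batch similarity matrix, and every other positive in the batch acts as a negative.

```python
import torch
import torch.nn.functional as F

def mnr_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0):
    # Cosine similarity between every anchor and every positive in the batch.
    scores = F.cosine_similarity(anchors.unsqueeze(1), positives.unsqueeze(0), dim=-1) * scale
    # The correct positive for anchor i is positives[i], i.e. the diagonal.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```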
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.7174 | 500 | 0.5531 |
| 1.4347 | 1000 | 0.0372 |
| 2.1521 | 1500 | 0.0195 |
| 2.8694 | 2000 | 0.0128 |
| 3.5868 | 2500 | 0.0087 |
| 4.3042 | 3000 | 0.0079 |
| 5.0215 | 3500 | 0.0143 |
| 5.7389 | 4000 | 0.0075 |
| 6.4562 | 4500 | 0.0056 |
| 7.1736 | 5000 | 0.0075 |
| 7.8910 | 5500 | 0.0069 |
| 8.6083 | 6000 | 0.0059 |
| 9.3257 | 6500 | 0.0054 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
liuyanchen1015/LLaMA2-7B-fingerprinted-SFT-GGUF | liuyanchen1015 | 2024-10-11T03:21:40Z | 16 | 0 | null | [
"gguf",
"aotuoquant",
"text-generation",
"en",
"base_model:cnut1648/LLaMA2-7B-fingerprinted-SFT",
"base_model:quantized:cnut1648/LLaMA2-7B-fingerprinted-SFT",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T02:35:22Z | ---
language:
- en
base_model:
- cnut1648/LLaMA2-7B-fingerprinted-SFT
pipeline_tag: text-generation
tags:
- gguf
- aotuoquant
--- |
RogerSquare/rogerthat | RogerSquare | 2024-10-11T03:21:03Z | 6 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-11T02:33:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Roger
---
# Rogerthat
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Roger` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('RogerSquare/rogerthat', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
brownyeyez/Mixed-VNPTAI-Qwen2.5-0.5B-v12 | brownyeyez | 2024-10-11T03:08:26Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-10T03:52:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf | RichardErkhov | 2024-10-11T03:05:16Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T07:25:09Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-70B-Instruct-norefusal - GGUF
- Model creator: https://huggingface.co/theo77186/
- Original model: https://huggingface.co/theo77186/Llama-3-70B-Instruct-norefusal/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-70B-Instruct-norefusal.Q2_K.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.Q2_K.gguf) | Q2_K | 24.56GB |
| [Llama-3-70B-Instruct-norefusal.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [Llama-3-70B-Instruct-norefusal.IQ3_S.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [Llama-3-70B-Instruct-norefusal.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [Llama-3-70B-Instruct-norefusal.IQ3_M.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [Llama-3-70B-Instruct-norefusal.Q3_K.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.Q3_K.gguf) | Q3_K | 31.91GB |
| [Llama-3-70B-Instruct-norefusal.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [Llama-3-70B-Instruct-norefusal.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [Llama-3-70B-Instruct-norefusal.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [Llama-3-70B-Instruct-norefusal.Q4_0.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/blob/main/Llama-3-70B-Instruct-norefusal.Q4_0.gguf) | Q4_0 | 37.22GB |
| [Llama-3-70B-Instruct-norefusal.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [Llama-3-70B-Instruct-norefusal.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [Llama-3-70B-Instruct-norefusal.Q4_K.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q4_K | 39.6GB |
| [Llama-3-70B-Instruct-norefusal.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [Llama-3-70B-Instruct-norefusal.Q4_1.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q4_1 | 41.27GB |
| [Llama-3-70B-Instruct-norefusal.Q5_0.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q5_0 | 45.32GB |
| [Llama-3-70B-Instruct-norefusal.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [Llama-3-70B-Instruct-norefusal.Q5_K.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q5_K | 46.52GB |
| [Llama-3-70B-Instruct-norefusal.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [Llama-3-70B-Instruct-norefusal.Q5_1.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q5_1 | 49.36GB |
| [Llama-3-70B-Instruct-norefusal.Q6_K.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q6_K | 53.91GB |
| [Llama-3-70B-Instruct-norefusal.Q8_0.gguf](https://huggingface.co/RichardErkhov/theo77186_-_Llama-3-70B-Instruct-norefusal-gguf/tree/main/) | Q8_0 | 69.83GB |
Original model description:
---
license: llama3
---
# Llama 3 70B Instruct no refusal
This is a model that uses the orthogonal feature ablation as featured in this
[paper](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction).
Calibration data:
- 256 prompts from [jondurbin/airoboros-2.2](https://huggingface.co/datasets/jondurbin/airoboros-2.2)
- 256 prompts from [AdvBench](https://github.com/llm-attacks/llm-attacks/blob/main/data/advbench/harmful_behaviors.csv)
- The direction is extracted between layer 40 and 41
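For the curious, a minimal sketch of the difference-in-means directional ablation described above (an illustration of the technique from the linked paper, not the model creator's code; the activation tensors are assumed inputs):

```python
# Assumes `harmful_acts` and `harmless_acts` are residual-stream activations of
# shape (n_prompts, hidden_dim), captured between layers 40 and 41.
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    # Difference-in-means direction, normalized to unit length
    d = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return d / d.norm()

def ablate(hidden: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # Remove each activation's component along the refusal direction
    return hidden - (hidden @ d).unsqueeze(-1) * d
```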
I haven't tested the model, but like the 8B model, it may still refuse some instructions.
**Use this model responsibly; I decline any liability resulting from the use of this model.**
I will post the code later.
|
mav23/fusion-guide-12b-0.1-GGUF | mav23 | 2024-10-11T02:55:33Z | 16 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"mistral",
"trl",
"cot",
"guidance",
"en",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:quantized:mistralai/Mistral-Nemo-Instruct-2407",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T01:34:15Z | ---
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
- cot
- guidance
---
# fusion-guide
[](https://postimg.cc/8jBrCNdH)
# Model Overview
fusion-guide is an advanced AI reasoning system built on the Mistral-Nemo 12B architecture. It employs a two-model approach to enhance its problem-solving capabilities. This method involves a "Guide" model that generates a structured, step-by-step plan to solve a given task. This plan is then passed to the primary "Response" model, which uses this guidance to craft an accurate and comprehensive response.
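A minimal sketch of this two-stage flow (illustrative only: the choice of response model and the prompt wiring are assumptions, not part of this release):

```python
from transformers import pipeline

# Stage 1: the Guide model produces a step-by-step plan
guide = pipeline("text-generation", model="fusionbase/fusion-guide-12b-0.1")
task = "Count the number of 'r's in the word 'strawberry'."
plan = guide(f"<guidance_prompt>{task}</guidance_prompt>",
             max_new_tokens=512, return_full_text=False)[0]["generated_text"]

# Stage 2: a separate chat model answers the task, conditioned on the plan
responder = pipeline("text-generation", model="mistralai/Mistral-Nemo-Instruct-2407")
answer = responder(f"{task}\n\nFollow this plan:\n{plan}",
                   max_new_tokens=512, return_full_text=False)
print(answer[0]["generated_text"])
```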
# Model and Data
fusion-guide is fine-tuned on a custom dataset consisting of task-based prompts in both English (90%) and German (10%). The tasks vary in complexity, including scenarios designed to be challenging or unsolvable, to enhance the model's ability to handle ambiguous situations. Each training sample follows the structure: prompt => guidance, teaching the model to break down complex tasks systematically.
Read a detailed description and evaluation of the model here: https://blog.fusionbase.com/ai-research/beyond-cot-how-fusion-guide-elevates-ai-reasoning-with-a-two-model-system
### Prompt format
The prompt must be enclosed within <guidance_prompt>{PROMPT}</guidance_prompt> tags, following the format below:
<guidance_prompt>Count the number of 'r's in the word 'strawberry,' and then write a Python script that checks if an arbitrary word contains the same number of 'r's.</guidance_prompt>
# Usage
fusion-guide can be used with vLLM and other Mistral-Nemo-compatible inference engines. Below is an example of how to use it with unsloth:
```python
from unsloth import FastLanguageModel
max_seq_length = 8192 * 1 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = False # Use 4bit quantization to reduce memory usage. Can be False.
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="fusionbase/fusion-guide-12b-0.1",
max_seq_length=max_seq_length,
dtype=dtype,
load_in_4bit=load_in_4bit
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
guidance_prompt = """<guidance_prompt>Count the number of 'r's in the word 'strawberry,' and then write a Python script that checks if an arbitrary word contains the same number of 'r's.</guidance_prompt>"""
messages = [{"role": "user", "content": guidance_prompt}]
inputs = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True, # Must add for generation
return_tensors="pt",
).to("cuda")
outputs = model.generate(input_ids=inputs, max_new_tokens=2000, use_cache=True, early_stopping=True, temperature=0)
result = tokenizer.batch_decode(outputs)
print(result[0][len(guidance_prompt):].replace("</s>", ""))
```
# Disclaimer
The model may occasionally fail to generate complete guidance, especially when the prompt includes specific instructions on how the responses should be structured. This limitation arises from the way the model was trained. |
edinlp/mistral-7b-v0.3-sft | edinlp | 2024-10-11T02:52:04Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T02:46:50Z | ---
library_name: transformers
license: other
base_model: mistralai/Mistral-7B-v0.3
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the openhermes-2.5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
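(For reference, the total train batch size follows from train_batch_size × num_devices × gradient_accumulation_steps = 1 × 4 × 16 = 64.)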
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
|
win10/Qwen2.5-2B-Instruct | win10 | 2024-10-11T02:37:14Z | 23 | 1 | null | [
"safetensors",
"qwen2",
"merge",
"mergekit",
"lazymergekit",
"Qwen/Qwen2.5-1.5B-Instruct",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"region:us"
] | null | 2024-10-11T02:35:24Z | ---
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
- Qwen/Qwen2.5-1.5B-Instruct
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen2.5-1.5B-Instruct
---
# Qwen2.5-2B-Instruct
Qwen2.5-2B-Instruct is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
## 🧩 Configuration
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 2]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [1, 3]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [2, 4]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [3, 5]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [4, 6]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [5, 7]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [6, 8]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [7, 9]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [8, 10]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [9, 11]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [10, 12]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [11, 13]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [12, 14]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [13, 15]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [14, 16]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [16, 18]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [17, 19]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [18, 20]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [19, 21]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [20, 22]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [21, 23]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [22, 24]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [23, 25]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [24, 26]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [25, 27]
model: Qwen/Qwen2.5-1.5B-Instruct
- sources:
- layer_range: [26, 28]
model: Qwen/Qwen2.5-1.5B-Instruct
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "win10/Qwen2.5-2B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ai2power/llama2-7b-hf-enron | ai2power | 2024-10-11T02:21:31Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-07T06:33:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nasa-impact/nasa-smd-ibm-v0.1 | nasa-impact | 2024-10-11T02:15:20Z | 254 | 32 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"earth science",
"climate",
"biology",
"en",
"dataset:nasa-impact/nasa-smd-IR-benchmark",
"dataset:nasa-impact/nasa-smd-qa-benchmark",
"dataset:ibm/Climate-Change-NER",
"arxiv:2405.10725",
"doi:10.57967/hf/1429",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-12-04T18:07:32Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: fill-mask
tags:
- earth science
- climate
- biology
datasets:
- nasa-impact/nasa-smd-IR-benchmark
- nasa-impact/nasa-smd-qa-benchmark
- ibm/Climate-Change-NER
---
# Model Card for Indus (nasa-smd-ibm-v0.1)
Indus (previously known as nasa-smd-ibm-v0.1) is a RoBERTa-based, Encoder-only transformer model, domain-adapted for NASA Science Mission Directorate (SMD) applications. It's fine-tuned on scientific journals and articles relevant to NASA SMD, aiming to enhance natural language technologies like information retrieval and intelligent search.
## Model Details
- **Base Model**: RoBERTa
- **Tokenizer**: Custom
- **Parameters**: 125M
- **Pretraining Strategy**: Masked Language Modeling (MLM)
- **Distilled Version**: You can download a distilled version of the model (30 Million Parameters) here: https://huggingface.co/nasa-impact/nasa-smd-ibm-distil-v0.1
## Training Data
- Wikipedia English (Feb 1, 2020)
- AGU Publications
- AMS Publications
- Scientific papers from Astrophysics Data Systems (ADS)
- PubMed abstracts
- PubMedCentral (PMC) (commercial license subset)

## Training Procedure
- **Framework**: fairseq 0.12.1 with PyTorch 1.9.1
- **transformers Version**: 4.2.0
- **Strategy**: Masked Language Modeling (MLM)
## Evaluation
### BLURB benchmark

(Standard deviation across 10 random seeds in parentheses. Macro avg. reported across datasets and micro avg. computed by averaging scores on each task then averaging across task averages.)
### Climate Change NER, and NASA-QA benchmark

(Climate Change NER and NASA-QA benchmark results. Standard deviation over multiple runs given in parentheses)
Please refer to the following dataset cards for further benchmarks and evaluation
- NASA-IR Benchmark - https://huggingface.co/datasets/nasa-impact/nasa-smd-IR-benchmark
- NASA-QA Benchmark - https://huggingface.co/datasets/nasa-impact/nasa-smd-qa-benchmark
- Climate Change NER Benchmark - https://huggingface.co/datasets/ibm/Climate-Change-NER
## Uses
- Named Entity Recognition (NER)
- Information Retrieval
- Sentence Transformers
- Extractive QA
For NASA SMD-related scientific use cases.
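A quick way to try the model (an illustrative example; the input sentence is invented and assumes the RoBERTa-style `<mask>` token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nasa-impact/nasa-smd-ibm-v0.1")
print(fill_mask("The rover collected <mask> samples from the Martian surface."))
```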
## Note
Accompanying preprint paper can be found here: https://arxiv.org/abs/2405.10725.
## Citation
If you find this work useful, please cite using the following bibtex citation:
```bibtex
@misc {nasa-impact_2023,
author = {Masayasu Maraoka and Bishwaranjan Bhattacharjee and Muthukumaran Ramasubramanian and Ikhsa Gurung and Rahul Ramachandran and Manil Maskey and Kaylin Bugbee and Rong Zhang and Yousef El Kurdi and Bharath Dandala and Mike Little and Elizabeth Fancher and Lauren Sanders and Sylvain Costes and Sergi Blanco-Cuaresma and Kelly Lockhart and Thomas Allen and Felix Grazes and Megan Ansdell and Alberto Accomazzi and Sanaz Vahidinia and Ryan McGranaghan and Armin Mehrabian and Tsendgar Lee},
title = { nasa-smd-ibm-v0.1 (Revision f01d42f) },
year = 2023,
url = { https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1 },
doi = { 10.57967/hf/1429 },
publisher = { Hugging Face }
}
```
## Attribution
IBM Research
- Masayasu Muraoka
- Bishwaranjan Bhattacharjee
- Rong Zhang
- Yousef El Kurdi
- Bharath Dandala
NASA SMD
- Muthukumaran Ramasubramanian
- Iksha Gurung
- Rahul Ramachandran
- Manil Maskey
- Kaylin Bugbee
- Mike Little
- Elizabeth Fancher
- Lauren Sanders
- Sylvain Costes
- Sergi Blanco-Cuaresma
- Kelly Lockhart
- Thomas Allen
- Felix Grazes
- Megan Ansdell
- Alberto Accomazzi
- Sanaz Vahidinia
- Ryan McGranaghan
- Armin Mehrabian
- Tsendgar Lee
## Disclaimer
This Encoder-only model is currently in an experimental phase. We are working to improve the model's capabilities and performance, and as we progress, we invite the community to engage with this model, provide feedback, and contribute to its evolution. |
nasa-impact/nasa-smd-ibm-distil-v0.1 | nasa-impact | 2024-10-11T02:14:02Z | 117 | 7 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"earth science",
"climate",
"biology",
"en",
"arxiv:2405.10725",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-21T18:41:13Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: fill-mask
tags:
- earth science
- climate
- biology
---
# Model Card for INDUS-Small (nasa-smd-ibm-distil-v0.1)
INDUS-Small (nasa-smd-ibm-distil-v0.1) is a distilled version of the RoBERTa-based, Encoder-only transformer model INDUS (nasa-impact/nasa-smd-ibm-v0.1), domain-adapted for NASA Science Mission Directorate (SMD) applications. It's fine-tuned on scientific journals and articles relevant to NASA SMD, aiming to enhance natural language technologies like information retrieval and intelligent search.
We trained the smaller model, INDUS_SMALL, with 38M parameters through knowledge distillation techniques by using INDUS as the teacher. INDUS_SMALL follows a 4-layer architecture recommended by the Neural Architecture Search engine (Trivedi et al., 2023) with an optimal trade-off between performance and latency. We adopted the distillation objective proposed in MiniLMv2 (Wang et al., 2021) to transfer fine-grained self-attention relations, which has been shown to be the current state-of-the-art (Udagawa et al., 2023). Using this objective, we trained the model for 500K steps with an effective batch size of 480 on 30 V100 GPUs.
## Model Details
- **Base Model**: INDUS
- **Tokenizer**: Custom
- **Original version Parameters**: 125M
- **Pretraining Strategy**: Masked Language Modeling (MLM)
- **Distilled Version Parameters**: 38 Million Parameters
## Training Data
- Wikipedia English (Feb 1, 2020)
- AGU Publications
- AMS Publications
- Scientific papers from Astrophysics Data Systems (ADS)
- PubMed abstracts
- PubMedCentral (PMC) (commercial license subset)

## Training Procedure
- **Framework**: fairseq 0.12.1 with PyTorch 1.9.1
- **transformers Version**: 4.2.0
- **Strategy**: Masked Language Modeling (MLM)
## Evaluation
### BLURB benchmark

(Standard deviation across 10 random seeds in parentheses. Macro avg. reported across datasets and micro avg. computed by averaging scores on each task then averaging across task averages.)
### Climate Change NER, and NASA-QA benchmark

(Climate Change NER and NASA-QA benchmark results. Standard deviation over multiple runs given in parentheses)
Please refer to the following dataset cards for further benchmarks and evaluation
- NASA-IR Benchmark - https://huggingface.co/datasets/nasa-impact/nasa-smd-IR-benchmark
- NASA-QA Benchmark - https://huggingface.co/datasets/nasa-impact/nasa-smd-qa-benchmark
- Climate Change NER Benchmark - https://huggingface.co/datasets/ibm/Climate-Change-NER
## Uses
- Named Entity Recognition (NER)
- Information Retrieval
- Sentence Transformers
- Extractive QA
For NASA SMD-related scientific use cases.
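A quick way to try the distilled model (an illustrative example; the sentence is invented and assumes the RoBERTa-style `<mask>` token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nasa-impact/nasa-smd-ibm-distil-v0.1")
print(fill_mask("Sea surface <mask> is a key indicator of climate variability."))
```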
## Note
This Model is released in support of the training and evaluation of the encoder language model ["Indus"](https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1).
Accompanying paper can be found here: https://arxiv.org/abs/2405.10725
## Citation
If you find this work useful, please cite using the following bibtex citation:
```bibtex
@misc {nasa-impact_2023,
author = {Masayasu Maraoka and Bishwaranjan Bhattacharjee and Muthukumaran Ramasubramanian and Ikhsa Gurung and Rahul Ramachandran and Manil Maskey and Kaylin Bugbee and Rong Zhang and Yousef El Kurdi and Bharath Dandala and Mike Little and Elizabeth Fancher and Lauren Sanders and Sylvain Costes and Sergi Blanco-Cuaresma and Kelly Lockhart and Thomas Allen and Felix Grazes and Megan Ansdell and Alberto Accomazzi and Sanaz Vahidinia and Ryan McGranaghan and Armin Mehrabian and Tsendgar Lee},
title = { nasa-smd-ibm-v0.1 (Revision f01d42f) },
year = 2023,
url = { https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1 },
doi = { 10.57967/hf/1429 },
publisher = { Hugging Face }
}
```
## Attribution
IBM Research
- Masayasu Muraoka
- Bishwaranjan Bhattacharjee
- Rong Zhang
- Yousef El Kurdi
- Bharath Dandala
NASA SMD
- Muthukumaran Ramasubramanian
- Iksha Gurung
- Rahul Ramachandran
- Manil Maskey
- Kaylin Bugbee
- Mike Little
- Elizabeth Fancher
- Lauren Sanders
- Sylvain Costes
- Sergi Blanco-Cuaresma
- Kelly Lockhart
- Thomas Allen
- Felix Grazes
- Megan Ansdell
- Alberto Accomazzi
- Sanaz Vahidinia
- Ryan McGranaghan
- Armin Mehrabian
- Tsendgar Lee
## Disclaimer
This Encoder-only model is currently in an experimental phase. We are working to improve the model's capabilities and performance, and as we progress, we invite the community to engage with this model, provide feedback, and contribute to its evolution.
|
billingsmoore/tibetan-to-english-translation | billingsmoore | 2024-10-11T02:12:35Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"tibetan",
"english",
"translation",
"nlp",
"buddhism",
"dharma",
"bo",
"en",
"dataset:billingsmoore/tibetan-to-english-translation-dataset",
"base_model:billingsmoore/phonetic-tibetan-to-english-translation",
"base_model:finetune:billingsmoore/phonetic-tibetan-to-english-translation",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2024-09-30T20:26:33Z | ---
base_model:
- google-t5/t5-large
- billingsmoore/phonetic-tibetan-to-english-translation
datasets:
- billingsmoore/tibetan-to-english-translation-dataset
language:
- bo
- en
library_name: transformers
license: cc
metrics:
- bleu
pipeline_tag: translation
tags:
- tibetan
- english
- translation
- nlp
- buddhism
- dharma
---
# Model Card for tibetan-to-english-translation
This model is a neural machine translation model for translating Literary Tibetan to English.
The model expects Tibetan text in either Tibetan script or transliterated according to THL Simplified Phonetic Transliteration as an input and outputs an English translation.
The model was evaluated using the BLEU metric as implemented by [sacreBLEU](https://pypi.org/project/sacrebleu/), with a final score of 59.3431.
This work is licensed under Creative Commons Attribution-NonCommercial 4.0 International
## Model Details
### Model Description
This model is a finetuned T5 model with 770 million parameters.
- **Developed by:** billingsmoore
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** Tibetan, English
- **License:** [Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/)
- **Finetuned from model [optional]:** 'google-t5/t5-large'
### Model Sources [optional]
- **Repository:** [MLotsawa on Github](https://github.com/billingsmoore/MLotsawa)
## Uses
This model is intended to be used as the translation model in the larger MLotsawa software, but can also be used in a Jupyter notebook or Python script.
### Direct Use
To use this model for translation you can use the following code:
```python
from transformers import pipeline
translator = pipeline('translation', 'billingsmoore/tibetan-to-english-translation')
input_text = <your transliterated Tibetan text>
translation = translator(input_text)
print(translation)
```
### Downstream Use
The model can be further finetuned using the following code:
```python
from datasets import load_dataset
from transformers import (
AutoTokenizer, DataCollatorForSeq2Seq,
AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments,
Seq2SeqTrainer, EarlyStoppingCallback, Adafactor
)
import evaluate
import numpy as np
from accelerate import Accelerator
dataset = load_dataset(<path_to_your_dataset>)
checkpoint = "billingsmoore/tibetan-to-english-translation"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
source_lang = 'bo'
target_lang = 'en'
prefix = "translate Tibetan to English: "
def preprocess_function(examples):
inputs = [prefix + example[source_lang] for example in examples['translation']]
targets = [example[target_lang] for example in examples['translation']]
model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
return model_inputs
tokenized_dataset = dataset.map(preprocess_function, batched=True)
metric = evaluate.load("sacrebleu")
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
result = {"bleu": result["score"]}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
result = {k: round(v, 4) for k, v in result.items()}
return result
early_stop = EarlyStoppingCallback()
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto")
optimizer = Adafactor(
model.parameters(),
scale_parameter=True,
relative_step=False,
warmup_init=False,
lr=3e-4
)
training_args = Seq2SeqTrainingArguments(
output_dir=".",
auto_find_batch_size=True,
predict_with_generate=True,
fp16=False, #check this
push_to_hub=False,
eval_strategy='epoch',
save_strategy='epoch',
load_best_model_at_end=True
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset['train'],
eval_dataset=tokenized_dataset['test'],
tokenizer=tokenizer,
optimizers=(optimizer, None),
data_collator=data_collator,
compute_metrics=compute_metrics,
callbacks=[early_stop]
)
trainer.train()
```
## Training Details
### Training Data
[Training Data for this project is available here.](https://www.kaggle.com/datasets/billingsmoore/classical-tibetan-to-english-translation-dataset)
This dataset consists of 100,000 pairs of sentences or phrases. The first member of each pair is a sentence or phrase in Classical Tibetan. The second member is the English translation of the first.
The pairs are pulled from texts sourced from Lotsawa House (lotsawahouse.org) and are offered under the same license as the original texts they provided.
This data was scraped, cleaned, and formatted programmatically.
### Training Procedure
The t5 tokenizer was updated in the same manner as ['billingsmoore/tibetan-phonetic-transliteration'](https://huggingface.co/billingsmoore/tibetan-phonetic-transliteration), the procedure for which can be found on that model card.
Beyond the training for ['billingsmoore/phonetic-tibetan-to-english-translation'](https://huggingface.co/billingsmoore/phonetic-tibetan-to-english-translation) whose full training is described in its model card,
this model was trained for 9 epochs on the dataset ['billingsmoore/tibetan-to-english-translation-dataset'](https://huggingface.co/datasets/billingsmoore/tibetan-to-english-translation-dataset)
#### Training Hyperparameters
- This model was trained using the Adafactor optimizer with a learning rate of 2e-5.
## Evaluation
The evaluation metric for this model was the BLEU score as implemented by [sacreBLEU](https://pypi.org/project/sacrebleu/).
BLEU (Bilingual Evaluation Understudy) scores measure the quality of
machine-generated translations by comparing them to human-provided reference translations. The score ranges from 0 to 100,
where 100 represents a perfect match with the reference translations. It evaluates the precision of n-grams (word sequences)
in the generated text, with higher scores indicating closer alignment to the reference translations. A brevity penalty is applied
to discourage translations that are too short.
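As a minimal illustration of how such a score is computed with sacreBLEU (toy sentences, not the actual evaluation data):

```python
import sacrebleu

predictions = ["May all beings be happy and free from suffering."]
references = [["May all sentient beings have happiness and be free of suffering."]]
print(sacrebleu.corpus_bleu(predictions, references).score)
```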
The final BLEU score was 59.3431. |
nasa-impact/nasa-smd-ibm-st-v2 | nasa-impact | 2024-10-11T02:10:50Z | 230 | 9 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"roberta",
"earth science",
"climate",
"biology",
"sentence-similarity",
"en",
"arxiv:2405.10725",
"doi:10.57967/hf/1800",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-20T22:02:14Z | ---
license: apache-2.0
language:
- en
library_name: sentence-transformers
tags:
- earth science
- climate
- biology
pipeline_tag: sentence-similarity
---
# Model Card for Indus-Retriever
Indus-Retriever (`nasa-smd-ibm-st-v2`) is a bi-encoder sentence transformer model fine-tuned from the nasa-smd-ibm-v0.1 encoder model. It is an updated version of `nasa-smd-ibm-st` with better performance (shown below). It was trained on 271 million examples along with a domain-specific dataset of 2.6 million examples from documents curated by the NASA Science Mission Directorate (SMD). With this model, we aim to enhance natural language technologies like information retrieval and intelligent search as they apply to SMD NLP applications.
You can also use a distilled version of the model here: https://huggingface.co/nasa-impact/nasa-ibm-st.38m
## Model Details
- **Base Encoder Model**: [INDUS](https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1)
- **Tokenizer**: Custom
- **Parameters**: 125M
- **Training Strategy**: Sentence pairs with a score indicating relevancy. The model encodes the two sentences independently, cosine similarity is calculated, and the similarity is optimized using the relevance score (see the sketch after this list).
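A minimal sketch of this objective in sentence-transformers terms (the pairs, scores, and hyperparameters below are illustrative assumptions):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("nasa-impact/nasa-smd-ibm-st-v2")

# Each example pairs two texts with a relevance score in [0, 1]
train_examples = [
    InputExample(texts=["solar flare activity", "eruptions on the Sun"], label=0.9),
    InputExample(texts=["solar flare activity", "ocean salinity data"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)  # pushes cosine similarity toward the label

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```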
## Training Data

Figure: Open dataset sources for sentence transformers (269M in total)
Additionally, 2.6M abstract + title pairs collected from NASA SMD documents.
## Training Procedure
- **Framework**: PyTorch 1.9.1
- **sentence-transformers version**: 4.30.2
- **Strategy**: Sentence Pairs
## Evaluation
Following models are evaluated:
1. All-MiniLM-l6-v2 [sentence-transformers/all-MiniLM-L6-v2]
2. BGE-base [BAAI/bge-base-en-v1.5]
3. RoBERTa-base [roberta-base]
4. nasa-smd-ibm-rtvr_v0.1 [nasa-impact/nasa-smd-ibm-st]

Figure: [BEIR](https://github.com/beir-cellar/beir) and [NASA-IR](https://huggingface.co/datasets/nasa-impact/nasa-smd-IR-benchmark) Evaluation Metrics
## Uses
- Information Retrieval
- Sentence Similarity Search
For NASA SMD-related scientific use cases.
### Usage
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("nasa-impact/nasa-smd-ibm-st-v2")

input_queries = [
    'query: how much protein should a female eat', 'query: summit define']

input_passages = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
query_embeddings = model.encode(input_queries)
passage_embeddings = model.encode(input_passages)
print(util.cos_sim(query_embeddings, passage_embeddings))
```
# Note
This Sentence Transformer Model is released in support of the training and evaluation of the encoder language model ["Indus"](https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1).
Accompanying paper can be found here: https://arxiv.org/abs/2405.10725
## Citation
If you find this work useful, please cite using the following bibtex citation:
```bibtex
@misc {nasa-impact_2024,
author = { {NASA-IMPACT} },
title = { nasa-smd-ibm-st-v2 (Revision d249d84) },
year = 2024,
url = { https://huggingface.co/nasa-impact/nasa-smd-ibm-st-v2 },
doi = { 10.57967/hf/1800 },
publisher = { Hugging Face }
}
```
## Attribution
IBM Research
- Aashka Trivedi
- Masayasu Muraoka
- Bishwaranjan Bhattacharjee
NASA SMD
- Muthukumaran Ramasubramanian
- Iksha Gurung
- Rahul Ramachandran
- Manil Maskey
- Kaylin Bugbee
- Mike Little
- Elizabeth Fancher
- Lauren Sanders
- Sylvain Costes
- Sergi Blanco-Cuaresma
- Kelly Lockhart
- Thomas Allen
- Felix Grazes
- Megan Ansdell
- Alberto Accomazzi
- Sanaz Vahidinia
- Ryan McGranaghan
- Armin Mehrabian
- Tsendgar Lee
## Disclaimer
This sentence-transformer model is currently in an experimental phase. We are working to improve the model's capabilities and performance, and as we progress, we invite the community to engage with this model, provide feedback, and contribute to its evolution.
|
RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf | RichardErkhov | 2024-10-11T02:09:37Z | 18 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-11T00:01:45Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Explore_Llama-3.1-8B-Inst - GGUF
- Model creator: https://huggingface.co/DeepAutoAI/
- Original model: https://huggingface.co/DeepAutoAI/Explore_Llama-3.1-8B-Inst/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Explore_Llama-3.1-8B-Inst.Q2_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q2_K.gguf) | Q2_K | 2.96GB |
| [Explore_Llama-3.1-8B-Inst.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Explore_Llama-3.1-8B-Inst.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Explore_Llama-3.1-8B-Inst.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Explore_Llama-3.1-8B-Inst.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Explore_Llama-3.1-8B-Inst.Q3_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q3_K.gguf) | Q3_K | 3.74GB |
| [Explore_Llama-3.1-8B-Inst.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Explore_Llama-3.1-8B-Inst.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Explore_Llama-3.1-8B-Inst.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Explore_Llama-3.1-8B-Inst.Q4_0.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Explore_Llama-3.1-8B-Inst.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Explore_Llama-3.1-8B-Inst.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Explore_Llama-3.1-8B-Inst.Q4_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q4_K.gguf) | Q4_K | 4.58GB |
| [Explore_Llama-3.1-8B-Inst.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Explore_Llama-3.1-8B-Inst.Q4_1.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Explore_Llama-3.1-8B-Inst.Q5_0.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Explore_Llama-3.1-8B-Inst.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Explore_Llama-3.1-8B-Inst.Q5_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q5_K.gguf) | Q5_K | 5.34GB |
| [Explore_Llama-3.1-8B-Inst.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Explore_Llama-3.1-8B-Inst.Q5_1.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Explore_Llama-3.1-8B-Inst.Q6_K.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q6_K.gguf) | Q6_K | 6.14GB |
| [Explore_Llama-3.1-8B-Inst.Q8_0.gguf](https://huggingface.co/RichardErkhov/DeepAutoAI_-_Explore_Llama-3.1-8B-Inst-gguf/blob/main/Explore_Llama-3.1-8B-Inst.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
tags: []
model-index:
- name: Explore_Llama-3.1-8B-Inst
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 77.95
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/Explore_Llama-3.1-8B-Inst
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 30.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/Explore_Llama-3.1-8B-Inst
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 17.52
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/Explore_Llama-3.1-8B-Inst
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.47
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/Explore_Llama-3.1-8B-Inst
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.64
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/Explore_Llama-3.1-8B-Inst
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 31.02
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DeepAutoAI/Explore_Llama-3.1-8B-Inst
name: Open LLM Leaderboard
---
# Model Card for Model ID
## Overview
**DeepAutoAI/Explore_Llama-3.1-8B-Inst** is developed by **deepAuto.ai** by learning the distribution of llama-3.1-8B-instruct.
Our approach leverages the base model’s pretrained weights and optimizes them for the **Winogrande** and **ARC-Challenge** datasets by
training a latent diffusion model on the pretrained weights. Specifically, this model is based on learning the distribution of transformer layers 16 to 31.
Through this process, we learn the distribution of the base model's weight space, enabling us to explore optimal configurations.
We then sample multiple sets of weights, using the **model-soup averaging technique** to identify the best-performing weights for both datasets.
These weights are merged using linear interpolation to create the final model weights for **DeepAutoAI/Explore_Llama-3.1-8B-Inst**.
This approach has led to improved performance on previously unseen leaderboard tasks, all without any additional task-specific training.
The work is currently in progress.
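The sampling-and-merging step above can be illustrated with a minimal sketch (not the authors' code; it assumes the sampled weight sets are state dicts with identical keys):

```python
import torch

def model_soup(state_dicts):
    """Uniform model soup: average several sampled weight sets."""
    avg = {k: torch.zeros_like(v, dtype=torch.float32) for k, v in state_dicts[0].items()}
    for sd in state_dicts:
        for k, v in sd.items():
            avg[k] += v.float() / len(state_dicts)
    return avg

def lerp(sd_a, sd_b, t=0.5):
    """Linear interpolation between two weight sets (the final merge step)."""
    return {k: (1 - t) * sd_a[k] + t * sd_b[k] for k in sd_a}
```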
## Model Details
<!-- Provide a longer summary of what this model is. -->
We trained a diffusion model to learn the distribution of a subset of Llama's layers, enabling the generation of weights that improve performance.
We generate task-specific weights on Winogrande and ARC-Challenge, then transfer the best model for leaderboard benchmarking.
- **Developed by:** DeepAuto.ai
- **Funded by [optional]:** DeepAuto.ai
- **Shared by [optional]:** DeepAuto.ai
- **Model type:** llama-3.1-8B
- **Language(s) (NLP):** English
- **License:** Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
- **Finetuned from model [optional]:** No fine-tuning
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** Under construction
- **Paper [optional]:** To be announced
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The direct use case of our work is to improve existing model performance, as well as to generate task-specific weights with no training.
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
Performance improvement of existing large models with limited compute
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
No fine-tuning or architecture generalization
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Using a generative model to produce weights can potentially lead to unintended or undesirable outputs. However, the generated content
will still fall within the range of what the base model is inherently capable of producing.
## How to Get Started with the Model
The work is in progress.
## Training Details
We employed a latent diffusion process on pretrained model weights, unlocking the ability to generate diverse, previously unseen neural networks.
Remarkably, even within the constraints of one-shot learning, our approach consistently produces a wide range of weight variations, each offering
distinct performance characteristics. These generated weights not only open opportunities for weight averaging and model merging but also have the
potential to significantly enhance model performance. Moreover, they enable the creation of task-specific weights, tailored to optimize performance
for specialized applications
### Training Data
The training data used to produce the current model is the base model's pretrained weights.
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- We selected a set of layers and combined their pretrained weights, then trained a Variational Autoencoder (VAE) to encode these weights into the layer dimension.
- We conditionally trained a diffusion model on this set of weights, allowing individual sampling of layer-specific weights.
- All selected layers were encoded into a 1024-dimensional space. This model exclusively contains the sampled weights for layer normalization.
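A minimal sketch of such a VAE over flattened layer weights is shown below; everything except the 1024-dimensional latent stated above is an assumption:

```python
import torch
import torch.nn as nn

class WeightVAE(nn.Module):
    """Encode a flattened per-layer weight vector into a 1024-d latent and decode it back."""
    def __init__(self, weight_dim, latent_dim=1024):
        super().__init__()
        self.enc = nn.Linear(weight_dim, 2 * latent_dim)  # predicts mean and log-variance
        self.dec = nn.Linear(latent_dim, weight_dim)

    def forward(self, w):
        mu, logvar = self.enc(w).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar
```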
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
<!-- This should link to a Dataset Card if possible. -->
We test our method on Winogrande, ARC-Challenge, and HellaSwag.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Nvidia-A100-80Gb
- **Hours used:** The VAE was trained for 4 hours; the diffusion process took another 4 hours
- **Compute Region:** South Korea
- **Carbon Emitted:** 0.96kg
## Technical Specifications [optional]
### Model Architecture and Objective
We used latent diffusion for weight generation, with llama-3.1-8B as the target architecture.
The primary objective of this weight generation process was to demonstrate that by learning only the distribution
of a few layers' weights (normalization layers, in this case) in an 8-billion-parameter model, it is possible to significantly enhance the
model's capabilities. Notably, this is achieved using a fraction of the computational resources and without the
need for fine-tuning, showcasing the efficiency and potential of this approach.
### Compute Infrastructure
Nvidia-A100 cluster
#### Hardware
A single Nvidia-A100
#### Software
The model was tested using the lm-evaluation-harness tool, version 0.4.3
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
For any questions, contact deepauto.ai
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DeepAutoAI__Explore_Llama-3.1-8B-Inst)
| Metric |Value|
|-------------------|----:|
|Avg. |28.50|
|IFEval (0-Shot) |77.95|
|BBH (3-Shot) |30.39|
|MATH Lvl 5 (4-Shot)|17.52|
|GPQA (0-shot) | 4.47|
|MuSR (0-shot) | 9.64|
|MMLU-PRO (5-shot) |31.02|
|
mav23/Qwen2.5-3B-Instruct-GGUF | mav23 | 2024-10-11T01:28:11Z | 25 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"text-generation",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-3B",
"base_model:quantized:Qwen/Qwen2.5-3B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-11T01:02:03Z | ---
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B
tags:
- chat
library_name: transformers
---
# Qwen2.5-3B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 3B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
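Upgrading `transformers` avoids this:

```bash
pip install --upgrade transformers
```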
## Quickstart
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B-Q4_0-GGUF | ZeroXClem | 2024-10-11T01:23:38Z | 36 | 1 | null | [
"gguf",
"merge",
"TIES",
"Llama3",
"BestMix",
"Chemistry",
"Einstein",
"instruction-following",
"conversational",
"long-form-generation",
"scientific",
"llama-cpp",
"gguf-my-repo",
"base_model:ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B",
"base_model:quantized:ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-11T01:23:17Z | ---
base_model: ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B
license: apache-2.0
tags:
- merge
- TIES
- Llama3
- BestMix
- Chemistry
- Einstein
- instruction-following
- conversational
- long-form-generation
- scientific
- llama-cpp
- gguf-my-repo
---
# ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B-Q4_0-GGUF
This model was converted to GGUF format from [`ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B`](https://huggingface.co/ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B-Q4_0-GGUF --hf-file llama3.1-bestmix-chem-einstein-8b-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B-Q4_0-GGUF --hf-file llama3.1-bestmix-chem-einstein-8b-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B-Q4_0-GGUF --hf-file llama3.1-bestmix-chem-einstein-8b-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ZeroXClem/Llama3.1-BestMix-Chem-Einstein-8B-Q4_0-GGUF --hf-file llama3.1-bestmix-chem-einstein-8b-q4_0.gguf -c 2048
```
|
RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf | RichardErkhov | 2024-10-11T01:18:48Z | 276 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-10T23:06:49Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Halu-L3-Stheno-BlackOasis-8B - GGUF
- Model creator: https://huggingface.co/Casual-Autopsy/
- Original model: https://huggingface.co/Casual-Autopsy/Halu-L3-Stheno-BlackOasis-8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Halu-L3-Stheno-BlackOasis-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [Halu-L3-Stheno-BlackOasis-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Halu-L3-Stheno-BlackOasis-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Halu-L3-Stheno-BlackOasis-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Halu-L3-Stheno-BlackOasis-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Halu-L3-Stheno-BlackOasis-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [Halu-L3-Stheno-BlackOasis-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Casual-Autopsy_-_Halu-L3-Stheno-BlackOasis-8B-gguf/blob/main/Halu-L3-Stheno-BlackOasis-8B.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model:
- grimjim/Llama-3-Oasis-v1-OAS-8B
- Hastagaras/Halu-8B-Llama3-Blackroot
- Sao10K/L3-8B-Stheno-v3.2
---
# Halu-L3-Stheno-BlackOasis-8B
Halu-L3-Stheno-BlackOasis-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [grimjim/Llama-3-Oasis-v1-OAS-8B](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B)
* [Hastagaras/Halu-8B-Llama3-Blackroot](https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot)
## 🧩 Configuration
```yaml
models:
- model: grimjim/Llama-3-Oasis-v1-OAS-8B
- model: Hastagaras/Halu-8B-Llama3-Blackroot
- model: Sao10K/L3-8B-Stheno-v3.2
merge_method: model_stock
base_model: Sao10K/L3-8B-Stheno-v3.2
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Casual-Autopsy/Halu-L3-Stheno-BlackOasis-8B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
kbulutozler/distilbert-base-uncased-FT-ner-NCBI-disease | kbulutozler | 2024-10-11T01:17:44Z | 198 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"NCBI-disease",
"NER",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-11T01:13:19Z | ---
library_name: transformers
tags:
- NCBI-disease
- NER
license: apache-2.0
language:
- en
metrics:
- seqeval
base_model:
- distilbert/distilbert-base-uncased
---
# Model Card for Model ID
Fine-tuned distilbert model. Trained on train set of NCBI-disease dataset taken from [BLURB](https://microsoft.github.io/BLURB/tasks.html).
## Model Details
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/kbulutozler/medical-llm-benchmark
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Train set of NCBI-disease dataset.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Classical fine-tuning.
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- learning_rate: 5e-5
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 3
- weight_decay: 0.01
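These values map directly onto a standard `transformers` `TrainingArguments` setup; a sketch (the output directory name is illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-ncbi-ner",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
)
```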
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
Test set of NCBI-disease dataset.
### Results
- Precision: 0.81
- Recall: 0.86
- Micro-F1: 0.84
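For inference, the checkpoint can be loaded with the standard token-classification pipeline; a usage sketch (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kbulutozler/distilbert-base-uncased-FT-ner-NCBI-disease",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("The patient was diagnosed with cystic fibrosis."))
```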
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** 1xRTX A4000
- **Hours used:** 00:06:00
|
TroyDoesAI/BlackSheep-5B | TroyDoesAI | 2024-10-11T01:16:41Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:TroyDoesAI/BlackSheep-RAG-5B",
"base_model:merge:TroyDoesAI/BlackSheep-RAG-5B",
"base_model:TroyDoesAI/Llama-3.2-4B",
"base_model:merge:TroyDoesAI/Llama-3.2-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-11T00:43:37Z | ---
base_model:
- TroyDoesAI/Llama-3.2-4B
- TroyDoesAI/BlackSheep-RAG-4B
library_name: transformers
tags:
- mergekit
- merge
---
# BlackSheep
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [TroyDoesAI/Llama-3.2-4B](https://huggingface.co/TroyDoesAI/Llama-3.2-4B)
* [TroyDoesAI/BlackSheep-RAG-4B](https://huggingface.co/TroyDoesAI/BlackSheep-RAG-4B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: TroyDoesAI/BlackSheep-RAG-4B
layer_range: [0, 42]
- model: TroyDoesAI/Llama-3.2-4B
layer_range: [0, 42]
merge_method: slerp
base_model: TroyDoesAI/Llama-3.2-4B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.50 # fallback for rest of tensors
dtype: bfloat16
```
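For reference, SLERP interpolates each tensor pair along the great circle between them instead of a straight line, which tends to preserve weight magnitudes better than plain averaging. A minimal per-tensor sketch (not mergekit's implementation):

```python
import torch

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))  # angle between tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```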
|
haorandai/png_randomnoise_fruit_solid_gray_20_with20constraints | haorandai | 2024-10-11T01:14:48Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-11T01:10:26Z | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - haorandai/png_randomnoise_fruit_solid_gray_20_with20constraints
This pipeline was finetuned from **CompVis/stable-diffusion-v1-4** on the **haorandai/png_randomnoise_fruit_solid_gray_20_with20constraints** dataset.
## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("haorandai/png_randomnoise_fruit_solid_gray_20_with20constraints", torch_dtype=torch.float16)
prompt = "None"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 20
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 224
* Mixed-precision: fp16
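For reference, these settings correspond roughly to the following invocation of the diffusers `train_text_to_image.py` example script (a hedged reconstruction, not the exact command used):

```bash
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --dataset_name="haorandai/png_randomnoise_fruit_solid_gray_20_with20constraints" \
  --resolution=224 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --num_train_epochs=20 \
  --learning_rate=1e-05 \
  --mixed_precision="fp16"
```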
## Intended uses & limitations
#### How to use
```python
from diffusers import DiffusionPipeline
import torch

# Assumed minimal example, mirroring the "Pipeline usage" snippet above.
pipeline = DiffusionPipeline.from_pretrained(
    "haorandai/png_randomnoise_fruit_solid_gray_20_with20constraints",
    torch_dtype=torch.float16,
)
image = pipeline("None").images[0]
image.save("my_image.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |