Column schema (value ranges observed over the full dataset):

- modelId: string, length 5 to 139
- author: string, length 2 to 42
- last_modified: timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-25 06:27:05
- downloads: int64, 0 to 223M
- likes: int64, 0 to 11.7k
- library_name: string, 476 distinct values
- tags: sequence, length 1 to 4.05k
- pipeline_tag: string, 54 distinct values
- createdAt: timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-25 06:22:25
- card: string, length 11 to 1.01M

Note: in every row of this dump, the `card` field contains the same Hugging Face HTTP 429 ("We had to rate limit you") error page instead of an actual model card, so each card cell below is shown as a placeholder.
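Since every data row follows the same ten-column pipe layout, the dump can be read back programmatically. Below is a minimal parsing sketch in Python; the filename `models_dump.md` and the helper `parse_rows` are illustrative assumptions, not part of the original dataset.

```python
import json
from pathlib import Path

# Column order matches the schema above.
COLUMNS = ["modelId", "author", "last_modified", "downloads", "likes",
           "library_name", "tags", "pipeline_tag", "createdAt", "card"]

def parse_rows(path: str) -> list[dict]:
    rows = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip().rstrip("|").strip()
        cells = [c.strip() for c in line.split(" | ")]
        # Keep only data rows: schema notes, the header row, and the
        # "---|..." separator all fail one of these checks.
        if len(cells) != len(COLUMNS) or cells[0] == "modelId":
            continue
        row = dict(zip(COLUMNS, cells))
        row["tags"] = json.loads(row["tags"])      # tags cell is a JSON array
        row["downloads"] = int(row["downloads"])   # numeric columns
        row["likes"] = int(row["likes"])
        rows.append(row)
    return rows

if __name__ == "__main__":
    for r in parse_rows("models_dump.md"):
        print(r["modelId"], r["library_name"], r["tags"][:3])
```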
modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
MinaMila/gemma2_2b_LoRa_ACSEmployment_2_cfda_ep1_22 | MinaMila | 2025-05-22T01:19:15Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-05-22T01:19:11Z | (card unavailable: HTTP 429 rate-limit error page) |
Video-18-katrina-lim-kiffy-katrinalim123-k/Video-18-katrina-lim-kiffy-katrinalim123-katrina-lim-tg-telegram1 | Video-18-katrina-lim-kiffy-katrinalim123-k | 2025-05-22T01:16:34Z | 0 | 0 | null | ["region:us"] | null | 2025-05-22T01:15:27Z | (card unavailable: HTTP 429 rate-limit error page) |
Rorschach4153/so101_60_new | Rorschach4153 | 2025-05-22T01:15:47Z | 0 | 0 | null | ["safetensors", "region:us"] | null | 2025-05-22T01:14:39Z | (card unavailable: HTTP 429 rate-limit error page) |
jmalejandrob79/nbmafckd2k5 | jmalejandrob79 | 2025-05-22T01:13:07Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-05-22T00:36:20Z | (card unavailable: HTTP 429 rate-limit error page) |
Video-18-katrina-lim-kiffy-katrinalim123-k/Video-18-katrina-lim-kiffy-katrinalim123-katrina-lim-tg-telegram | Video-18-katrina-lim-kiffy-katrinalim123-k | 2025-05-22T01:11:31Z | 0 | 0 | null | ["region:us"] | null | 2025-05-22T01:09:04Z | (card unavailable: HTTP 429 rate-limit error page) |
gagein/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-thorny_lightfooted_panda | gagein | 2025-05-22T01:11:05Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am thorny lightfooted panda", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-05-21T19:04:42Z | (card unavailable: HTTP 429 rate-limit error page) |
RRB47157455/Eu | RRB47157455 | 2025-05-22T01:07:19Z | 0 | 0 | null | ["region:us"] | null | 2025-05-22T01:07:19Z | (card unavailable: HTTP 429 rate-limit error page) |
haihp02/test-reward-app | haihp02 | 2025-05-22T01:06:52Z | 0 | 0 | null | ["region:us"] | null | 2025-03-19T09:26:23Z | (card unavailable: HTTP 429 rate-limit error page) |
keioio/chatbot_english_llama_3_1_instruction_v2 | keioio | 2025-05-22T01:04:52Z | 0 | 0 | transformers | ["transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-05-22T01:00:06Z | (card unavailable: HTTP 429 rate-limit error page) |
BLEACH366/P2DFlow | BLEACH366 | 2025-05-22T01:04:20Z | 0 | 1 | null | ["arxiv:2411.17196", "region:us"] | null | 2025-03-19T01:57:18Z | (card unavailable: HTTP 429 rate-limit error page) |
jiil89/ingr_classify_test | jiil89 | 2025-05-22T01:04:16Z | 0 | 0 | null | ["safetensors", "gemma3", "license:apache-2.0", "region:us"] | null | 2025-05-22T01:01:28Z | (card unavailable: HTTP 429 rate-limit error page) |
jakatingkir/9995800f-16cb-4af2-93ee-15ffb09f7ae8 | jakatingkir | 2025-05-22T01:03:50Z | 0 | 0 | null | ["region:us"] | null | 2025-05-22T00:41:08Z | (card unavailable: HTTP 429 rate-limit error page) |
Priyanship/base_sami_22k_ftmix_ep20 | Priyanship | 2025-05-22T01:02:53Z | 0 | 0 | transformers | ["transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2025-05-21T16:47:22Z | (card unavailable: HTTP 429 rate-limit error page) |
sannadu/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-galloping_bold_ram | sannadu | 2025-05-22T01:02:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am galloping bold ram", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-1.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-05-10T03:59:36Z | (card unavailable: HTTP 429 rate-limit error page) |
ashsic/bcv | ashsic | 2025-05-22T01:02:03Z | 0 | 0 | null | ["region:us"] | null | 2025-05-22T01:02:03Z | (card unavailable: HTTP 429 rate-limit error page) |
mradermacher/E1-Code-14B-i1-GGUF | mradermacher | 2025-05-22T01:00:09Z | 0 | 0 | transformers | ["transformers", "gguf", "en", "dataset:agentica-org/DeepCoder-Preview-Dataset", "base_model:Salesforce/E1-Code-14B", "base_model:quantized:Salesforce/E1-Code-14B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-05-21T20:09:21Z | (card unavailable: HTTP 429 rate-limit error page) |
Kurosawama/Llama-3.1-8B-DPO-beamsearch-align | Kurosawama | 2025-05-22T00:58:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "trl", "dpo", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-05-22T00:36:55Z | (card unavailable: HTTP 429 rate-limit error page) |
raraujo/merlinite-7b-lab-finetuned | raraujo | 2025-05-22T00:55:16Z | 26 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:instructlab/merlinite-7b-lab", "base_model:quantized:instructlab/merlinite-7b-lab", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-05-07T22:17:32Z | (card unavailable: HTTP 429 rate-limit error page) |
ascust/tmp | ascust | 2025-05-22T00:55:12Z | 0 | 0 | null | ["region:us"] | null | 2025-04-21T05:18:32Z | (card unavailable: HTTP 429 rate-limit error page) |
izzcw/large_crafting_sft_success | izzcw | 2025-05-22T00:54:12Z | 80 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-05-08T00:37:30Z | (card unavailable: HTTP 429 rate-limit error page) |
iloveroraa/STEMS | iloveroraa | 2025-05-22T00:53:25Z | 0 | 0 | null | ["license:unknown", "region:us"] | null | 2025-05-22T00:48:00Z | (card unavailable: HTTP 429 rate-limit error page) |
alentzy/logic_model | alentzy | 2025-05-22T00:52:25Z | 0 | 0 | null | ["tensorboard", "safetensors", "region:us"] | null | 2025-05-22T00:43:37Z | (card unavailable: HTTP 429 rate-limit error page) |
Priyanship/base_sami_22k_ftpseudo | Priyanship | 2025-05-22T00:51:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-21T13:43:08Z | |
RafaelLima123/YAN | RafaelLima123 | 2025-05-22T00:50:54Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-22T00:23:19Z | |
HelenHenny66/RK_crackle | HelenHenny66 | 2025-05-22T00:50:38Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-05-22T00:49:23Z | |
mlfoundations-dev/e1_science_longest_r1_10k | mlfoundations-dev | 2025-05-22T00:47:46Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | 2025-05-21T22:06:51Z | |
Bell-Herald/reft_and_lora_a | Bell-Herald | 2025-05-22T00:47:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T01:36:47Z | |
baseten/whisper_trt_large_v3_turbo_test_NVIDIA_L4_0_16_0 | baseten | 2025-05-22T00:45:09Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-22T00:17:14Z | |
sameerreddy-predibase/internvl3-8b-trl-sft-ChartQA | sameerreddy-predibase | 2025-05-22T00:44:02Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-22T00:44:02Z | |
mattbacker/02061525-7dcd-4f9f-b19d-eb21c868d8ce | mattbacker | 2025-05-22T00:43:40Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-22T00:21:36Z | |
FormlessAI/0800c410-145e-478f-b080-e35493cfc934 | FormlessAI | 2025-05-22T00:43:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:NousResearch/Nous-Capybara-7B-V1.9",
"base_model:finetune:NousResearch/Nous-Capybara-7B-V1.9",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T20:41:07Z | |
yunjae-won/mp_mistral7bv3_sft_dpo_beta2e-2_epoch1_ratio_dpor_multisample | yunjae-won | 2025-05-22T00:43:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T00:39:31Z | |
ghost613/VC-kss-decoder-char | ghost613 | 2025-05-22T00:43:24Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"region:us"
] | null | 2025-05-21T06:22:51Z | |
stephenoptins/clone-replicate | stephenoptins | 2025-05-22T00:43:00Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-21T21:14:48Z | |
Bell-Herald/Bell-Herald | Bell-Herald | 2025-05-22T00:41:26Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-22T00:41:26Z | |
stefanoscotta/gemma_multimodal_Segm_V2_continuation_continuation_5-7ep | stefanoscotta | 2025-05-22T00:41:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T12:52:57Z | |
mradermacher/Amadeus-Verbo-qwen2.5-0.5B-PT-BR-Instruct-GGUF | mradermacher | 2025-05-22T00:41:14Z | 29 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"pt",
"base_model:amadeusai/Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental",
"base_model:quantized:amadeusai/Amadeus-Verbo-MI-Qwen-2.5-0.5B-PT-BR-Instruct-Experimental",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-19T21:32:48Z | |
newtts2017/f292974x | newtts2017 | 2025-05-22T00:37:55Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-22T00:26:40Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: f292974x
---
# F292974X
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `f292974x` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "f292974x",
    "lora_weights": "https://huggingface.co/newtts2017/f292974x/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('newtts2017/f292974x', weight_name='lora.safetensors')
image = pipeline('f292974x').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
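As one concrete example of that weighting workflow, here is a minimal sketch; `fuse_lora` and its `lora_scale` argument come from diffusers' PEFT integration and are an assumption about your installed diffusers version, not something this card guarantees.
```py
from diffusers import AutoPipelineForText2Image
import torch

# Same setup as above, then bake the LoRA into the base weights at 80% strength.
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('newtts2017/f292974x', weight_name='lora.safetensors')
pipeline.fuse_lora(lora_scale=0.8)  # fused weights avoid per-step LoRA overhead
image = pipeline('f292974x').images[0]
```
Fusing is a one-way convenience for inference speed; keep the unfused pipeline if you plan to swap or re-weight adapters later.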
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/newtts2017/f292974x/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/Manticore-32B-i1-GGUF | mradermacher | 2025-05-22T00:37:55Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"reasoning",
"mathematics",
"coding",
"en",
"dataset:open-thoughts/OpenThoughts2-1M",
"dataset:open-r1/OpenR1-Math-220k",
"dataset:nvidia/OpenMathReasoning",
"base_model:Daemontatox/Manticore-32B",
"base_model:quantized:Daemontatox/Manticore-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-21T21:15:06Z | ---
base_model: Daemontatox/Manticore-32B
datasets:
- open-thoughts/OpenThoughts2-1M
- open-r1/OpenR1-Math-220k
- nvidia/OpenMathReasoning
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- reasoning
- mathematics
- coding
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Daemontatox/Manticore-32B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Manticore-32B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
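If you want a concrete starting point in Python, the sketch below uses the `llama-cpp-python` bindings; the `Llama.from_pretrained` helper and its parameters are an assumption about your installed version of that package, and the filename is just one of the quants listed below.
```py
from llama_cpp import Llama

# Downloads the chosen quant from this repo and memory-maps it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Manticore-32B-i1-GGUF",
    filename="Manticore-32B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
    n_ctx=4096,
)
print(llm("Explain imatrix quantization in one sentence.", max_tokens=64)["choices"][0]["text"])
```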
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.5 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 15.0 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Manticore-32B-i1-GGUF/resolve/main/Manticore-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF | mradermacher | 2025-05-22T00:37:54Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"Roleplay",
"roleplaying",
"zh",
"dataset:ScratchThePlan/cn-role-play-we-with-no-tomorrow-fell-in-love-yesterday",
"dataset:ScratchThePlan/novel_cn_roleplay_dataset_liars_lips_fall_apart_in_love",
"base_model:ScratchThePlan/vanilla-cn-roleplay-0.2",
"base_model:quantized:ScratchThePlan/vanilla-cn-roleplay-0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-21T19:46:36Z | ---
base_model: ScratchThePlan/vanilla-cn-roleplay-0.2
datasets:
- ScratchThePlan/cn-role-play-we-with-no-tomorrow-fell-in-love-yesterday
- ScratchThePlan/novel_cn_roleplay_dataset_liars_lips_fall_apart_in_love
language:
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- roleplay
- Roleplay
- roleplaying
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ScratchThePlan/vanilla-cn-roleplay-0.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
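As a sketch of the multi-part case mentioned above: split GGUF files can be reassembled with a plain byte-level concatenation before loading. The `*.gguf.part1of2`-style naming is an assumption (this repo's files may ship unsplit), so adjust the glob pattern to the actual filenames.
```py
from pathlib import Path

quant = "vanilla-cn-roleplay-0.2.i1-Q4_K_M.gguf"
parts = sorted(Path(".").glob(quant + ".part*of*"))  # e.g. .part1of2, .part2of2
if parts:
    with open(quant, "wb") as out:
        for part in parts:
            out.write(part.read_bytes())  # simple byte-level concatenation
```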
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/vanilla-cn-roleplay-0.2-i1-GGUF/resolve/main/vanilla-cn-roleplay-0.2.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Goraint/medgemma-4b-it-MLX-AWQ-4bit | Goraint | 2025-05-22T00:36:24Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"gemma3",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"license:gemma",
"region:us"
] | null | 2025-05-22T00:29:42Z | ---
license: gemma
base_model:
- google/medgemma-4b-it
library_name: mlx
---
## 📌 Overview
A 4-bit AWQ-quantized version of **google/medgemma-4b-it**, optimized for efficient inference with the **MLX library** and designed to handle long-context tasks (192k tokens) with reduced resource usage. It retains the core capabilities of medgemma-4b while enabling deployment on edge devices.
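A minimal sketch of running it, assuming the `mlx-lm` package can load this checkpoint directly (the prompt and generation parameters below are illustrative, not from this card):
```py
from mlx_lm import load, generate

# Fetches the 4-bit weights from this repo and runs a short generation on-device.
model, tokenizer = load("Goraint/medgemma-4b-it-MLX-AWQ-4bit")
prompt = "List two red flags for sepsis in adults."
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```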
|
heloisams13/Helo | heloisams13 | 2025-05-22T00:36:22Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-22T00:23:36Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Helo
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/heloisams13/Helo/resolve/main/lora.safetensors"
}
output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('heloisams13/Helo', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
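As a sketch of the weighting part, the snippet below scales the adapter before generation; `set_adapters` and the `adapter_name` argument come from diffusers' PEFT integration and are assumptions about your installed version.
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('heloisams13/Helo', weight_name='lora.safetensors',
                           adapter_name='helo')
pipeline.set_adapters(['helo'], adapter_weights=[0.9])  # run the LoRA at 90% strength
image = pipeline('TOK').images[0]
```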
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/heloisams13/Helo/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF | mradermacher | 2025-05-22T00:36:06Z | 49 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"pt",
"base_model:amadeusai/Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct",
"base_model:quantized:amadeusai/Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-21T11:20:23Z | ---
base_model: amadeusai/Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amadeusai/Amadeus-Verbo-FI-Qwen2.5-1.5B-PT-BR-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
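For a quick local test from Python, the `llama-cpp-python` bindings can load one of the single-file quants directly (a hedged sketch; the filename matches the Q4_K_M entry in the table below):

```python
# Hedged sketch: requires `pip install llama-cpp-python` and a downloaded quant file.
from llama_cpp import Llama

llm = Llama(
    model_path="AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q4_K_M.gguf",  # from the table below
    n_ctx=4096,  # context window; adjust to your memory budget
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explique o que é quantização em uma frase."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```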
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-1.5B-PT-BR-Instruct.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
fizzzzz9/cas4133-sft | fizzzzz9 | 2025-05-22T00:34:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T00:34:17Z | ---
license: apache-2.0
---
|
YuchenLi01/generatedSoftQwen2.5MathPRM72BMoreNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.4_42 | YuchenLi01 | 2025-05-22T00:33:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathPRM72B_MoreNoGT",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T21:24:30Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathPRM72B_MoreNoGT
model-index:
- name: generatedSoftQwen2.5MathPRM72BMoreNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.4_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# generatedSoftQwen2.5MathPRM72BMoreNoGT_Qwen2.5-1.5BInstruct_dpo_ebs32_lr5e-07_beta0.4_42
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathPRM72B_MoreNoGT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5241
- Rewards/chosen: -0.3073
- Rewards/rejected: -1.0873
- Rewards/accuracies: 0.7128
- Rewards/margins: 0.7800
- Logps/rejected: -48.4095
- Logps/chosen: -40.1228
- Logits/rejected: -2.2746
- Logits/chosen: -2.4057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the illustrative TRL sketch after this list):
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
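As an illustration only, here is a minimal sketch of how these settings might be expressed with TRL's `DPOTrainer`. This is not the authors' training script; the output directory and dataset split are assumptions, and it presumes a recent `trl` release where the tokenizer is passed as `processing_class`:

```python
# Illustrative sketch, not the original training code.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Split name is an assumption; the card only names the dataset repo.
train_dataset = load_dataset(
    "YuchenLi01/MATH_Qwen2.5-1.5BInstruct_Soft_DPO_Qwen2.5MathPRM72B_MoreNoGT",
    split="train",
)

config = DPOConfig(
    output_dir="dpo-out",              # placeholder
    beta=0.4,
    learning_rate=5e-7,
    per_device_train_batch_size=4,     # x8 GPUs = total train batch size 32
    per_device_eval_batch_size=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    seed=42,
)
trainer = DPOTrainer(model=model, args=config, train_dataset=train_dataset,
                     processing_class=tokenizer)
trainer.train()
```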
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.7033 | 0.0151 | 20 | 0.7003 | 0.0046 | -0.0023 | 0.4966 | 0.0069 | -45.6971 | -39.3432 | -2.2676 | -2.3744 |
| 0.7182 | 0.0302 | 40 | 0.6980 | -0.0061 | -0.0025 | 0.5034 | -0.0036 | -45.6976 | -39.3699 | -2.2655 | -2.3719 |
| 0.7255 | 0.0453 | 60 | 0.6940 | -0.0081 | -0.0127 | 0.5 | 0.0046 | -45.7231 | -39.3750 | -2.2628 | -2.3692 |
| 0.6908 | 0.0604 | 80 | 0.6946 | -0.0328 | -0.0406 | 0.5169 | 0.0077 | -45.7927 | -39.4367 | -2.2570 | -2.3636 |
| 0.6945 | 0.0755 | 100 | 0.6905 | -0.0700 | -0.0781 | 0.5135 | 0.0081 | -45.8866 | -39.5297 | -2.2510 | -2.3579 |
| 0.6633 | 0.0906 | 120 | 0.6791 | -0.1371 | -0.1833 | 0.5743 | 0.0462 | -46.1496 | -39.6973 | -2.2375 | -2.3456 |
| 0.6783 | 0.1057 | 140 | 0.6685 | -0.2414 | -0.2990 | 0.5709 | 0.0576 | -46.4390 | -39.9582 | -2.2215 | -2.3309 |
| 0.6663 | 0.1208 | 160 | 0.6600 | -0.3085 | -0.4138 | 0.6182 | 0.1053 | -46.7258 | -40.1259 | -2.2090 | -2.3198 |
| 0.6267 | 0.1360 | 180 | 0.6494 | -0.3873 | -0.5245 | 0.625 | 0.1372 | -47.0026 | -40.3228 | -2.1959 | -2.3083 |
| 0.6675 | 0.1511 | 200 | 0.6408 | -0.3742 | -0.5414 | 0.6723 | 0.1673 | -47.0449 | -40.2901 | -2.1975 | -2.3108 |
| 0.5976 | 0.1662 | 220 | 0.6360 | -0.3183 | -0.4938 | 0.6284 | 0.1754 | -46.9258 | -40.1505 | -2.2149 | -2.3297 |
| 0.6237 | 0.1813 | 240 | 0.6289 | -0.2506 | -0.4531 | 0.6385 | 0.2025 | -46.8240 | -39.9812 | -2.2239 | -2.3386 |
| 0.6652 | 0.1964 | 260 | 0.6242 | -0.2810 | -0.4885 | 0.6554 | 0.2076 | -46.9127 | -40.0571 | -2.2303 | -2.3465 |
| 0.5528 | 0.2115 | 280 | 0.6190 | -0.2905 | -0.5201 | 0.6453 | 0.2296 | -46.9915 | -40.0808 | -2.2328 | -2.3502 |
| 0.5824 | 0.2266 | 300 | 0.6137 | -0.2687 | -0.5307 | 0.6723 | 0.2620 | -47.0181 | -40.0265 | -2.2371 | -2.3558 |
| 0.6621 | 0.2417 | 320 | 0.6086 | -0.2982 | -0.6178 | 0.6791 | 0.3197 | -47.2359 | -40.1001 | -2.2366 | -2.3565 |
| 0.5479 | 0.2568 | 340 | 0.6053 | -0.2955 | -0.6115 | 0.6689 | 0.3161 | -47.2202 | -40.0933 | -2.2425 | -2.3627 |
| 0.5145 | 0.2719 | 360 | 0.6002 | -0.3154 | -0.6574 | 0.6655 | 0.3420 | -47.3349 | -40.1432 | -2.2457 | -2.3675 |
| 0.573 | 0.2870 | 380 | 0.5934 | -0.3112 | -0.6809 | 0.6689 | 0.3697 | -47.3935 | -40.1326 | -2.2511 | -2.3735 |
| 0.581 | 0.3021 | 400 | 0.5924 | -0.2674 | -0.6619 | 0.6926 | 0.3945 | -47.3461 | -40.0231 | -2.2569 | -2.3795 |
| 0.6143 | 0.3172 | 420 | 0.5832 | -0.2503 | -0.6633 | 0.6824 | 0.4130 | -47.3496 | -39.9805 | -2.2662 | -2.3895 |
| 0.5916 | 0.3323 | 440 | 0.5788 | -0.3041 | -0.7508 | 0.6926 | 0.4466 | -47.5683 | -40.1150 | -2.2576 | -2.3808 |
| 0.5879 | 0.3474 | 460 | 0.5775 | -0.3086 | -0.7588 | 0.6757 | 0.4502 | -47.5884 | -40.1260 | -2.2511 | -2.3746 |
| 0.5115 | 0.3625 | 480 | 0.5757 | -0.3601 | -0.8423 | 0.6892 | 0.4822 | -47.7972 | -40.2549 | -2.2422 | -2.3660 |
| 0.6289 | 0.3776 | 500 | 0.5724 | -0.3753 | -0.8725 | 0.6892 | 0.4972 | -47.8726 | -40.2928 | -2.2424 | -2.3676 |
| 0.5719 | 0.3927 | 520 | 0.5671 | -0.3709 | -0.9016 | 0.6993 | 0.5307 | -47.9453 | -40.2819 | -2.2382 | -2.3638 |
| 0.6288 | 0.4079 | 540 | 0.5665 | -0.2982 | -0.8504 | 0.6959 | 0.5523 | -47.8175 | -40.1001 | -2.2548 | -2.3808 |
| 0.5127 | 0.4230 | 560 | 0.5637 | -0.3104 | -0.8975 | 0.7027 | 0.5871 | -47.9352 | -40.1307 | -2.2563 | -2.3827 |
| 0.4284 | 0.4381 | 580 | 0.5600 | -0.3551 | -0.9401 | 0.7027 | 0.5850 | -48.0417 | -40.2425 | -2.2512 | -2.3781 |
| 0.5243 | 0.4532 | 600 | 0.5551 | -0.3152 | -0.9416 | 0.6926 | 0.6264 | -48.0454 | -40.1427 | -2.2581 | -2.3851 |
| 0.6616 | 0.4683 | 620 | 0.5541 | -0.3150 | -0.9392 | 0.7027 | 0.6242 | -48.0394 | -40.1422 | -2.2611 | -2.3884 |
| 0.6297 | 0.4834 | 640 | 0.5525 | -0.2586 | -0.9025 | 0.7196 | 0.6438 | -47.9475 | -40.0012 | -2.2740 | -2.4016 |
| 0.6808 | 0.4985 | 660 | 0.5502 | -0.2965 | -0.9631 | 0.6926 | 0.6666 | -48.0990 | -40.0959 | -2.2628 | -2.3909 |
| 0.5353 | 0.5136 | 680 | 0.5486 | -0.3485 | -1.0023 | 0.7095 | 0.6538 | -48.1972 | -40.2259 | -2.2587 | -2.3874 |
| 0.539 | 0.5287 | 700 | 0.5476 | -0.3661 | -1.0367 | 0.6993 | 0.6706 | -48.2830 | -40.2698 | -2.2585 | -2.3881 |
| 0.4013 | 0.5438 | 720 | 0.5453 | -0.4023 | -1.0902 | 0.7128 | 0.6879 | -48.4169 | -40.3604 | -2.2498 | -2.3791 |
| 0.5562 | 0.5589 | 740 | 0.5415 | -0.3945 | -1.0905 | 0.7095 | 0.6961 | -48.4177 | -40.3408 | -2.2523 | -2.3813 |
| 0.5544 | 0.5740 | 760 | 0.5409 | -0.4088 | -1.1076 | 0.6959 | 0.6988 | -48.4604 | -40.3767 | -2.2470 | -2.3766 |
| 0.5809 | 0.5891 | 780 | 0.5417 | -0.4001 | -1.1025 | 0.7027 | 0.7024 | -48.4477 | -40.3549 | -2.2540 | -2.3846 |
| 0.397 | 0.6042 | 800 | 0.5393 | -0.3618 | -1.0662 | 0.6959 | 0.7043 | -48.3568 | -40.2592 | -2.2517 | -2.3817 |
| 0.4548 | 0.6193 | 820 | 0.5381 | -0.3309 | -1.0533 | 0.6892 | 0.7224 | -48.3246 | -40.1819 | -2.2600 | -2.3907 |
| 0.525 | 0.6344 | 840 | 0.5369 | -0.2863 | -1.0185 | 0.7230 | 0.7322 | -48.2376 | -40.0704 | -2.2652 | -2.3957 |
| 0.5139 | 0.6495 | 860 | 0.5349 | -0.2988 | -1.0277 | 0.7128 | 0.7289 | -48.2607 | -40.1016 | -2.2740 | -2.4050 |
| 0.4067 | 0.6647 | 880 | 0.5363 | -0.3147 | -1.0460 | 0.7128 | 0.7313 | -48.3063 | -40.1413 | -2.2735 | -2.4051 |
| 0.4624 | 0.6798 | 900 | 0.5326 | -0.2782 | -1.0348 | 0.7162 | 0.7566 | -48.2783 | -40.0501 | -2.2769 | -2.4084 |
| 0.5342 | 0.6949 | 920 | 0.5343 | -0.2946 | -1.0345 | 0.7128 | 0.7398 | -48.2775 | -40.0912 | -2.2722 | -2.4030 |
| 0.4489 | 0.7100 | 940 | 0.5334 | -0.2873 | -1.0315 | 0.7095 | 0.7441 | -48.2700 | -40.0729 | -2.2758 | -2.4068 |
| 0.584 | 0.7251 | 960 | 0.5308 | -0.2673 | -1.0195 | 0.7264 | 0.7522 | -48.2401 | -40.0229 | -2.2776 | -2.4089 |
| 0.3874 | 0.7402 | 980 | 0.5320 | -0.2839 | -1.0383 | 0.7128 | 0.7543 | -48.2870 | -40.0645 | -2.2774 | -2.4087 |
| 0.936 | 0.7553 | 1000 | 0.5300 | -0.2841 | -1.0540 | 0.7128 | 0.7699 | -48.3263 | -40.0649 | -2.2778 | -2.4089 |
| 0.4904 | 0.7704 | 1020 | 0.5288 | -0.2762 | -1.0422 | 0.7095 | 0.7660 | -48.2968 | -40.0452 | -2.2794 | -2.4103 |
| 0.5106 | 0.7855 | 1040 | 0.5268 | -0.2817 | -1.0518 | 0.7162 | 0.7701 | -48.3209 | -40.0589 | -2.2784 | -2.4093 |
| 0.4281 | 0.8006 | 1060 | 0.5280 | -0.2981 | -1.0670 | 0.7230 | 0.7689 | -48.3589 | -40.0998 | -2.2753 | -2.4065 |
| 0.4574 | 0.8157 | 1080 | 0.5280 | -0.3024 | -1.0718 | 0.7297 | 0.7695 | -48.3710 | -40.1106 | -2.2786 | -2.4098 |
| 0.6234 | 0.8308 | 1100 | 0.5273 | -0.3079 | -1.0759 | 0.7128 | 0.7679 | -48.3810 | -40.1245 | -2.2720 | -2.4032 |
| 0.4874 | 0.8459 | 1120 | 0.5269 | -0.3017 | -1.0821 | 0.7061 | 0.7804 | -48.3966 | -40.1090 | -2.2717 | -2.4026 |
| 0.6614 | 0.8610 | 1140 | 0.5244 | -0.3120 | -1.0845 | 0.7162 | 0.7725 | -48.4026 | -40.1346 | -2.2769 | -2.4080 |
| 0.5131 | 0.8761 | 1160 | 0.5271 | -0.3124 | -1.0878 | 0.6993 | 0.7755 | -48.4110 | -40.1356 | -2.2775 | -2.4090 |
| 0.799 | 0.8912 | 1180 | 0.5268 | -0.3164 | -1.0867 | 0.7196 | 0.7702 | -48.4080 | -40.1458 | -2.2746 | -2.4057 |
| 0.6439 | 0.9063 | 1200 | 0.5277 | -0.3064 | -1.0850 | 0.7162 | 0.7786 | -48.4037 | -40.1205 | -2.2754 | -2.4067 |
| 0.5469 | 0.9215 | 1220 | 0.5252 | -0.3056 | -1.0953 | 0.7196 | 0.7897 | -48.4295 | -40.1186 | -2.2754 | -2.4069 |
| 0.4788 | 0.9366 | 1240 | 0.5271 | -0.3270 | -1.0834 | 0.7095 | 0.7564 | -48.3999 | -40.1722 | -2.2733 | -2.4042 |
| 0.5101 | 0.9517 | 1260 | 0.5239 | -0.2998 | -1.0810 | 0.7196 | 0.7811 | -48.3937 | -40.1042 | -2.2737 | -2.4045 |
| 0.5103 | 0.9668 | 1280 | 0.5241 | -0.3124 | -1.0873 | 0.7264 | 0.7749 | -48.4095 | -40.1356 | -2.2737 | -2.4047 |
| 0.5847 | 0.9819 | 1300 | 0.5244 | -0.3142 | -1.0984 | 0.7196 | 0.7842 | -48.4373 | -40.1401 | -2.2737 | -2.4046 |
| 0.6099 | 0.9970 | 1320 | 0.5251 | -0.3061 | -1.0845 | 0.7128 | 0.7784 | -48.4025 | -40.1199 | -2.2745 | -2.4056 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu121
- Datasets 3.5.0
- Tokenizers 0.20.3
|
infmourir/MUSIC_pretrained | infmourir | 2025-05-22T00:33:13Z | 0 | 0 | null | [
"safetensors",
"text-classification",
"en",
"de",
"fr",
"ru",
"base_model:google/gemma-3-12b-it",
"base_model:finetune:google/gemma-3-12b-it",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-05-22T00:07:41Z | ---
license: apache-2.0
language:
- en
- de
- fr
- ru
metrics:
- f1
base_model:
- google/gemma-3-12b-it
- google/gemma-3-27b-it
pipeline_tag: text-classification
--- |
g-assismoraes/gemma-3-4b-it-fpi-alpha-4.0-var-tiebe | g-assismoraes | 2025-05-22T00:32:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-22T00:28:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
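Pending an official snippet, here is a minimal hedged sketch for an image-text-to-text Gemma 3 checkpoint, assuming a recent `transformers` release and that this fine-tune keeps the standard chat format (the image URL is a placeholder):

```python
# Hedged sketch: generic image-text-to-text usage, not an official example.
from transformers import pipeline

pipe = pipeline("image-text-to-text",
                model="g-assismoraes/gemma-3-4b-it-fpi-alpha-4.0-var-tiebe")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/sample.png"},  # placeholder
        {"type": "text", "text": "Describe this image."},
    ],
}]
print(pipe(text=messages, max_new_tokens=64))
```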
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xw17/gemma-2-2b-it_finetuned__optimized1_universal_FT | xw17 | 2025-05-22T00:31:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T00:28:00Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
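Pending an official snippet, here is a minimal hedged sketch using the `transformers` text-generation pipeline, assuming this fine-tune keeps the standard Gemma 2 chat template (the prompt and decoding settings are illustrative):

```python
# Hedged sketch: generic chat usage for a Gemma-2-based SFT checkpoint.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="xw17/gemma-2-2b-it_finetuned__optimized1_universal_FT",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Give me one tip for writing clear code."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # last message is the model's reply
```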
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mungert/Llama-3.1-Nemotron-Nano-4B-v1.1-GGUF | Mungert | 2025-05-22T00:28:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"text-generation",
"en",
"dataset:nvidia/Llama-Nemotron-Post-Training-Dataset",
"arxiv:2408.11796",
"arxiv:2502.00203",
"arxiv:2505.00949",
"base_model:nvidia/Llama-3.1-Minitron-4B-Width-Base",
"base_model:quantized:nvidia/Llama-3.1-Minitron-4B-Width-Base",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-05-21T21:58:58Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- llama-3
- pytorch
base_model:
- nvidia/Llama-3.1-Minitron-4B-Width-Base
datasets:
- nvidia/Llama-Nemotron-Post-Training-Dataset
---
# <span style="color: #7FFF7F;">Llama-3.1-Nemotron-Nano-4B-v1.1 GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`92ecdcc0`](https://github.com/ggerganov/llama.cpp/commit/92ecdcc06a4c405a415bcaa0cb772bc560aa23b1).
## <span style="color: #7FFF7F;">Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)</span>
Our latest quantization method introduces **precision-adaptive quantization** for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on **Llama-3-8B**. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
### **Benchmark Context**
All tests conducted on **Llama-3-8B-Instruct** using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
### **Method**
- **Dynamic Precision Allocation**:
  - First/Last 25% of layers → IQ4_XS (selected layers)
  - Middle 50% → IQ2_XXS/IQ3_S (increases efficiency)
- **Critical Component Protection**:
  - Embeddings/output layers use Q5_K
  - Reduces error propagation by 38% vs standard 1-2bit
### **Quantization Performance Comparison (Llama-3-8B)**
| Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|------------------|---------|----------|---------|--------|-----------|----------|
| IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
| IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
| IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
| IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
| IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
**Key**:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU avx2, 2048 token context)
- Size differences reflect mixed quantization overhead
**Key Improvements:**
- 🔥 **IQ1_M** shows massive 43.9% perplexity reduction (27.46 → 15.41)
- 🚀 **IQ2_S** cuts perplexity by 36.9% while adding only 0.2GB
- ⚡ **IQ1_S** maintains 39.7% better accuracy despite 1-bit quantization
**Tradeoffs:**
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
### **When to Use These Models**
✔ **Fitting models into GPU VRAM**
✔ **Memory-constrained deployments**
✔ **CPU and Edge Devices** where 1-2bit errors can be tolerated
✔ **Research** into ultra-low-bit quantization
## **Choosing the Right Model Format**
Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides **similar dynamic range** as FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.
📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.
📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
---
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format offering **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.
📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.
---
### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.
📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.
📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
---
### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
  - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
  - **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
  - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
  - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
  - **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
  - **Use case**: Best for **ARM-based devices** or **low-memory environments**.
---
### **Summary Table: Model Format Selection**
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
---
## **Included Files & Details**
### `Llama-3.1-Nemotron-Nano-4B-v1.1-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K** .
### `Llama-3.1-Nemotron-Nano-4B-v1.1-q8_0.gguf`
- Fully **Q8** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.
### `Llama-3.1-Nemotron-Nano-4B-v1.1-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer IQ4_NL for better accuracy.
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
❤ **Please click "Like" if you find this useful!**
Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Free Network Monitor](https://readyforquantum.com/dashboard/?assistant=open)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4o-mini)
- `HugLLM` (Hugging Face open-source)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4o-mini** for:
- **Create custom cmd processors to run .net code on Free Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
- 🔑 Get more tokens by logging in or [downloading our Free Network Monitor Agent with integrated AI Assistant](https://readyforquantum.com/download)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on Hugging Face Inference API
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Free Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution!
# Llama-3.1-Nemotron-Nano-4B-v1.1
## Model Overview

Llama-3.1-Nemotron-Nano-4B-v1.1 is a large language model (LLM) derived from [nvidia/Llama-3.1-Minitron-4B-Width-Base](https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base), which was created from Llama 3.1 8B using [our LLM compression technique](https://arxiv.org/abs/2408.11796) and offers improvements in model accuracy and efficiency. It is a reasoning model that is post-trained for reasoning, human chat preferences, and tasks such as RAG and tool calling.
Llama-3.1-Nemotron-Nano-4B-v1.1 is a model which offers a great tradeoff between model accuracy and efficiency. The model fits on a single RTX GPU and can be used locally. The model supports a context length of 128K.
This model underwent a multi-phase post-training process to enhance both its reasoning and non-reasoning capabilities. This includes a supervised fine-tuning stage for Math, Code, Reasoning, and Tool Calling as well as multiple reinforcement learning (RL) stages using Reward-aware Preference Optimization (RPO) algorithms for both chat and instruction-following. The final model checkpoint is obtained after merging the final SFT and RPO checkpoints.
This model is part of the Llama Nemotron Collection. You can find the other model(s) in this family here:
- [Llama-3.3-Nemotron-Ultra-253B-v1](https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1)
- [Llama-3.3-Nemotron-Super-49B-v1](https://huggingface.co/nvidia/Llama-3.3-Nemotron-Super-49B-v1)
- [Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1)
This model is ready for commercial use.
## License/Terms of Use
GOVERNING TERMS: Your use of this model is governed by the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). Additional Information: [Llama 3.1 Community License Agreement](https://www.llama.com/llama3_1/license/). Built with Llama.
**Model Developer:** NVIDIA
**Model Dates:** Trained between August 2024 and May 2025
**Data Freshness:** The pretraining data has a cutoff of June 2023.
## Use Case:
Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks. Balance of model accuracy and compute efficiency (the model fits on a single RTX GPU and can be used locally).
## Release Date: <br>
5/20/2025 <br>
## References
- [\[2408.11796\] LLM Pruning and Distillation in Practice: The Minitron Approach](https://arxiv.org/abs/2408.11796)
- [\[2502.00203\] Reward-aware Preference Optimization: A Unified Mathematical Framework for Model Alignment](https://arxiv.org/abs/2502.00203)
- [\[2505.00949\] Llama-Nemotron: Efficient Reasoning Models](https://arxiv.org/abs/2505.00949)
## Model Architecture
**Architecture Type:** Dense decoder-only Transformer model
**Network Architecture:** Llama 3.1 Minitron Width 4B Base
## Intended use
Llama-3.1-Nemotron-Nano-4B-v1.1 is a general purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Portuguese, Hindi, Spanish, and Thai) are also supported.
# Input:
- **Input Type:** Text
- **Input Format:** String
- **Input Parameters:** One-Dimensional (1D)
- **Other Properties Related to Input:** Context length up to 131,072 tokens
## Output:
- **Output Type:** Text
- **Output Format:** String
- **Output Parameters:** One-Dimensional (1D)
- **Other Properties Related to Output:** Context length up to 131,072 tokens
## Model Version:
1.1 (5/20/2025)
## Software Integration
- **Runtime Engine:** NeMo 24.12 <br>
- **Recommended Hardware Microarchitecture Compatibility:**
- NVIDIA Hopper
- NVIDIA Ampere
## Quick Start and Usage Recommendations:
1. Reasoning mode (ON/OFF) is controlled via the system prompt, which must be set as shown in the example below. All instructions should be contained within the user prompt
2. We recommend setting temperature to `0.6`, and Top P to `0.95` for Reasoning ON mode
3. We recommend using greedy decoding for Reasoning OFF mode
4. We have provided a list of prompts to use for evaluation for each benchmark where a specific template is required
See the snippet below for usage with Hugging Face Transformers library. Reasoning mode (ON/OFF) is controlled via system prompt. Please see the example below.
Our code requires the transformers package version to be `4.44.2` or higher.
### Example of “Reasoning On:”
```python
import torch
import transformers
model_id = "nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1"
model_kwargs = {"torch_dtype": torch.bfloat16, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
temperature=0.6,
top_p=0.95,
**model_kwargs
)
# Thinking can be "on" or "off"
thinking = "on"
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"}, {"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
### Example of “Reasoning Off:”
```python
import torch
import transformers
model_id = "nvidia/Llama-3.1-Nemotron-Nano-4B-v1"
model_kwargs = {"torch_dtype": torch.bfloat16, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
do_sample=False,
**model_kwargs
)
# Thinking can be "on" or "off"
thinking = "off"
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"}, {"role": "user", "content": "Solve x*(sin(x)+2)=0"}]))
```
For some prompts, even though thinking is disabled, the model may still think before responding. If desired, users can prevent this by pre-filling the assistant response.
```python
import torch
import transformers
model_id = "nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1"
model_kwargs = {"torch_dtype": torch.bfloat16, "device_map": "auto"}
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = tokenizer.eos_token_id
# Thinking can be "on" or "off"
thinking = "off"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
max_new_tokens=32768,
do_sample=False,
**model_kwargs
)
print(pipeline([{"role": "system", "content": f"detailed thinking {thinking}"}, {"role": "user", "content": "Solve x*(sin(x)+2)=0"}, {"role":"assistant", "content":"<think>\n</think>"}]))
```
## Running a vLLM Server with Tool-call Support
Llama-3.1-Nemotron-Nano-4B-v1.1 supports tool calling. This HF repo hosts a tool-calling parser as well as a chat template in Jinja, which can be used to launch a vLLM server.
Here is a shell script example to launch a vLLM server with tool-call support. `vllm/vllm-openai:v0.6.6` or newer should support the model.
```shell
#!/bin/bash
CWD=$(pwd)
PORT=5000
git clone https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
docker run -it --rm \
--runtime=nvidia \
--gpus all \
--shm-size=16GB \
-p ${PORT}:${PORT} \
-v ${CWD}:${CWD} \
vllm/vllm-openai:v0.6.6 \
--model $CWD/Llama-3.1-Nemotron-Nano-4B-v1.1 \
--trust-remote-code \
--seed 1 \
--host "0.0.0.0" \
--port $PORT \
--served-model-name "Llama-Nemotron-Nano-4B-v1.1" \
--tensor-parallel-size 1 \
--max-model-len 131072 \
--gpu-memory-utilization 0.95 \
--enforce-eager \
--enable-auto-tool-choice \
--tool-parser-plugin "${CWD}/Llama-3.1-Nemotron-Nano-4B-v1.1/llama_nemotron_nano_toolcall_parser.py" \
--tool-call-parser "llama_nemotron_json" \
--chat-template "${CWD}/Llama-3.1-Nemotron-Nano-4B-v1.1/llama_nemotron_nano_generic_tool_calling.jinja"
```
Alternatively, you can use a virtual environment to launch a vLLM server like below.
```console
$ git clone https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
$ conda create -n vllm python=3.12 -y
$ conda activate vllm
$ python -m vllm.entrypoints.openai.api_server \
--model Llama-3.1-Nemotron-Nano-4B-v1.1 \
--trust-remote-code \
--seed 1 \
--host "0.0.0.0" \
--port 5000 \
--served-model-name "Llama-Nemotron-Nano-4B-v1.1" \
--tensor-parallel-size 1 \
--max-model-len 131072 \
--gpu-memory-utilization 0.95 \
--enforce-eager \
--enable-auto-tool-choice \
--tool-parser-plugin "Llama-3.1-Nemotron-Nano-4B-v1.1/llama_nemotron_nano_toolcall_parser.py" \
--tool-call-parser "llama_nemotron_json" \
--chat-template "Llama-3.1-Nemotron-Nano-4B-v1.1/llama_nemotron_nano_generic_tool_calling.jinja"
```
After launching a vLLM server, you can call the server with tool-call support using a Python script like below.
```python
>>> from openai import OpenAI
>>> client = OpenAI(
base_url="http://0.0.0.0:5000/v1",
api_key="dummy",
)
>>> completion = client.chat.completions.create(
model="Llama-Nemotron-Nano-4B-v1.1",
messages=[
{"role": "system", "content": "detailed thinking on"},
{"role": "user", "content": "My bill is $100. What will be the amount for 18% tip?"},
],
tools=[
{"type": "function", "function": {"name": "calculate_tip", "parameters": {"type": "object", "properties": {"bill_total": {"type": "integer", "description": "The total amount of the bill"}, "tip_percentage": {"type": "integer", "description": "The percentage of tip to be applied"}}, "required": ["bill_total", "tip_percentage"]}}},
{"type": "function", "function": {"name": "convert_currency", "parameters": {"type": "object", "properties": {"amount": {"type": "integer", "description": "The amount to be converted"}, "from_currency": {"type": "string", "description": "The currency code to convert from"}, "to_currency": {"type": "string", "description": "The currency code to convert to"}}, "required": ["from_currency", "amount", "to_currency"]}}},
],
)
>>> completion.choices[0].message.content
'<think>\nOkay, let\'s see. The user has a bill of $100 and wants to know the amount of a 18% tip. So, I need to calculate the tip amount. The available tools include calculate_tip, which requires bill_total and tip_percentage. The parameters are both integers. The bill_total is 100, and the tip percentage is 18. So, the function should multiply 100 by 18% and return 18.0. But wait, maybe the user wants the total including the tip? The question says "the amount for 18% tip," which could be interpreted as the tip amount itself. Since the function is called calculate_tip, it\'s likely that it\'s designed to compute the tip, not the total. So, using calculate_tip with bill_total=100 and tip_percentage=18 should give the correct result. The other function, convert_currency, isn\'t relevant here. So, I should call calculate_tip with those values.\n</think>\n\n'
>>> completion.choices[0].message.tool_calls
[ChatCompletionMessageToolCall(id='chatcmpl-tool-2972d86817344edc9c1e0f9cd398e999', function=Function(arguments='{"bill_total": 100, "tip_percentage": 18}', name='calculate_tip'), type='function')]
```
## Inference:
**Engine:** Transformers
**Test Hardware:**
- BF16:
  - 1x RTX 50 Series GPUs
  - 1x RTX 40 Series GPUs
  - 1x RTX 30 Series GPUs
  - 1x H100-80GB GPU
  - 1x A100-80GB GPU
**Preferred/Supported Operating System(s):** Linux <br>
## Training Datasets
A large variety of training data was used for the post-training pipeline, including manually annotated data and synthetic data.
The data for the multi-stage post-training phases for improvements in Code, Math, and Reasoning is a compilation of SFT and RL data that supports improvements in the math, code, general reasoning, and instruction-following capabilities of the original Llama instruct model.
Prompts have been sourced from either public and open corpus or synthetically generated. Responses were synthetically generated by a variety of models, with some prompts containing responses for both Reasoning On and Off modes, to train the model to distinguish between the two modes.
**Data Collection for Training Datasets:** <br>
* Hybrid: Automated, Human, Synthetic <br>
**Data Labeling for Training Datasets:** <br>
* N/A <br>
## Evaluation Datasets
We used the datasets listed below to evaluate Llama-3.1-Nemotron-Nano-4B-v1.1.
**Data Collection for Evaluation Datasets:** Hybrid: Human/Synthetic
**Data Labeling for Evaluation Datasets:** Hybrid: Human/Synthetic/Automatic
## Evaluation Results
These results contain both “Reasoning On”, and “Reasoning Off”. We recommend using temperature=`0.6`, top_p=`0.95` for “Reasoning On” mode, and greedy decoding for “Reasoning Off” mode. All evaluations are done with 32k sequence length. We run the benchmarks up to 16 times and average the scores to be more accurate.
> NOTE: Where applicable, a Prompt Template will be provided. While completing benchmarks, please ensure that you are parsing for the correct output format as per the provided prompt in order to reproduce the benchmarks seen below.
### MT-Bench
| Reasoning Mode | Score |
|--------------|------------|
| Reasoning Off | 7.4 |
| Reasoning On | 8.0 |
### MATH500
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 71.8% |
| Reasoning On | 96.2% |
User Prompt Template:
```
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
```
### AIME25
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 13.3% |
| Reasoning On | 46.3% |
User Prompt Template:
```
"Below is a math question. I want you to reason through the steps and then give a final answer. Your final answer should be in \boxed{}.\nQuestion: {question}"
```
### GPQA-D
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 33.8% |
| Reasoning On | 55.1% |
User Prompt Template:
```
"What is the correct answer to this question: {question}\nChoices:\nA. {option_A}\nB. {option_B}\nC. {option_C}\nD. {option_D}\nLet's think step by step, and put the final answer (should be a single letter A, B, C, or D) into a \boxed{}"
```
### IFEval
| Reasoning Mode | Strict:Prompt | Strict:Instruction |
|--------------|------------|------------|
| Reasoning Off | 70.1% | 78.5% |
| Reasoning On | 75.5% | 82.6% |
### BFCL v2 Live
| Reasoning Mode | Score |
|--------------|------------|
| Reasoning Off | 63.6% |
| Reasoning On | 67.9% |
User Prompt Template:
```
<AVAILABLE_TOOLS>{functions}</AVAILABLE_TOOLS>
{user_prompt}
```
### MBPP 0-shot
| Reasoning Mode | pass@1 |
|--------------|------------|
| Reasoning Off | 61.9% |
| Reasoning On | 85.8% |
User Prompt Template:
````
You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ Instruction
Here is the given problem and test examples:
{prompt}
Please use the python programming language to solve this problem.
Please make sure that your code includes the functions from the test samples and that the input and output formats of these functions match the test samples.
Please return all completed codes in one code block.
This code block should be in the following format:
```python
# Your codes here
```
````
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ [Explainability](explainability.md), [Bias](bias.md), [Safety & Security](safety.md), and [Privacy](privacy.md) Subcards.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). |
DLYS/Qwen2.5-14b-MEDITUNE | DLYS | 2025-05-22T00:24:19Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-29T06:07:42Z | ---
library_name: transformers
tags: []
---
# Qwen2.5-14b-MEDITUNE
This model is a fine-tuned version of Qwen2.5-14b-instruct, trained on Henrychur/MMedBench and a Korean Medical QA dataset. The Korean Medical QA dataset is available [here](https://www.aihub.or.kr/aihubdata/data/list.do?currMenu=115&topMenu=100&&srchDataRealmCode=REALM006)
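A minimal hedged inference sketch (standard Qwen2.5 chat usage; the prompt and decoding settings are illustrative):

```python
# Hedged sketch: standard Qwen2.5 chat inference, not an official example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DLYS/Qwen2.5-14b-MEDITUNE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "고혈압의 1차 치료 약제는 무엇인가요?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```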
# Test
We tested this model on sean0042/KorMedMCQA.
# Result

* The rationale refers to the reasoning process generated by Qwen-2.5-72B; it marks the cases where this rationale was included during training.
* The 5-response ensemble determines the final answer by voting among five responses and selecting the answer that receives the most votes (a minimal sketch follows).
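A minimal sketch of that voting step (the `responses` list of extracted answer letters is hypothetical; answer extraction itself is dataset-specific):

```python
# Hedged sketch: majority vote over five extracted answer letters.
from collections import Counter

responses = ["A", "C", "A", "A", "B"]  # hypothetical answers from 5 samples
final_answer, votes = Counter(responses).most_common(1)[0]
print(final_answer, votes)  # -> "A" 3
```
 |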
vaishnavisatish/RAGmodel | vaishnavisatish | 2025-05-22T00:22:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T00:22:56Z | ---
license: apache-2.0
---
|
Wachiraya/speecht5_finetuned_th_en_2 | Wachiraya | 2025-05-22T00:21:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-05-22T00:20:35Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_th_en_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_th_en_2
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts); the training dataset is not named in this card.
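Pending fuller documentation, here is a minimal hedged TTS sketch; the zero speaker embedding is a placeholder and should be replaced with a real 512-dimensional x-vector:

```python
# Hedged sketch: standard SpeechT5 TTS inference with the HiFi-GAN vocoder.
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "Wachiraya/speecht5_finetuned_th_en_2"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, sawasdee krub.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```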
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Samanthaz14/Lana | Samanthaz14 | 2025-05-22T00:20:53Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-21T23:40:50Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
tarro-ml/wav2vec2-base-wonders-phonemes | tarro-ml | 2025-05-22T00:19:36Z | 317 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-05T18:48:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
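A minimal sketch for trying this checkpoint as an ASR pipeline (this assumes a CTC head and 16 kHz mono audio, which this card does not confirm; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="tarro-ml/wav2vec2-base-wonders-phonemes",
)
result = asr("path/to/audio.wav")  # placeholder path; any 16 kHz mono file
print(result["text"])  # expected to be a phoneme transcription
```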
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
taoranl2/qwen25-coder-32b-hazard_epoch_2 | taoranl2 | 2025-05-22T00:19:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-22T00:00:29Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** taoranl2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF | mradermacher | 2025-05-22T00:19:16Z | 41 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"pt",
"base_model:amadeusai/Amadeus-Verbo-MI-Qwen-2.5-3B-PT-BR-Instruct-Experimental",
"base_model:quantized:amadeusai/Amadeus-Verbo-MI-Qwen-2.5-3B-PT-BR-Instruct-Experimental",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-28T00:09:15Z | ---
base_model: amadeusai/Amadeus-Verbo-MI-Qwen-2.5-3B-PT-BR-Instruct-Experimental
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amadeusai/Amadeus-Verbo-MI-Qwen-2.5-3B-PT-BR-Instruct-Experimental
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
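For example, a minimal sketch with the `llama-cpp-python` bindings (the file name is taken from the table below; chat-template handling may differ across versions):
```python
from llama_cpp import Llama

llm = Llama(model_path="AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Olá! Pode se apresentar em português?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```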
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-3B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-3B-PT-BR-Instruct.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SQL-R1-7B-i1-GGUF | mradermacher | 2025-05-22T00:18:56Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:MPX0222forHF/SQL-R1-7B",
"base_model:quantized:MPX0222forHF/SQL-R1-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-21T18:48:20Z | ---
base_model: MPX0222forHF/SQL-R1-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MPX0222forHF/SQL-R1-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SQL-R1-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/SQL-R1-7B-i1-GGUF/resolve/main/SQL-R1-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
yoonuspk17/stock_ai_predictor | yoonuspk17 | 2025-05-22T00:18:54Z | 0 | 0 | null | [
"finance",
"feature-extraction",
"base_model:foduucom/stockmarket-pattern-detection-yolov8",
"base_model:finetune:foduucom/stockmarket-pattern-detection-yolov8",
"license:mit",
"region:us"
] | feature-extraction | 2025-05-21T11:21:53Z | ---
license: mit
metrics:
- accuracy
base_model:
- foduucom/stockmarket-pattern-detection-yolov8
pipeline_tag: feature-extraction
tags:
- finance
--- |
mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF | mradermacher | 2025-05-22T00:18:51Z | 53 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"pt",
"base_model:amadeusai/Amadeus-Verbo-MI-Qwen-2.5-14B-PT-BR-Instruct-Experimental",
"base_model:quantized:amadeusai/Amadeus-Verbo-MI-Qwen-2.5-14B-PT-BR-Instruct-Experimental",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-28T00:50:58Z | ---
base_model: amadeusai/Amadeus-Verbo-MI-Qwen-2.5-14B-PT-BR-Instruct-Experimental
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amadeusai/Amadeus-Verbo-MI-Qwen-2.5-14B-PT-BR-Instruct-Experimental
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AV-MI-Qwen2.5-14B-PT-BR-Instruct-GGUF/resolve/main/AV-MI-Qwen2.5-14B-PT-BR-Instruct.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
linsanityuk/task-9-google-gemma-2-2b | linsanityuk | 2025-05-22T00:18:25Z | 68 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"custom_code",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"region:us"
] | null | 2025-05-18T17:13:57Z | ---
base_model: microsoft/Phi-3.5-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
moondream/moondream-2b-2025-04-14-4bit | moondream | 2025-05-22T00:16:27Z | 456 | 4 | null | [
"safetensors",
"gguf",
"moondream1",
"image-text-to-text",
"custom_code",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-20T13:23:38Z | ---
license: apache-2.0
pipeline_tag: image-text-to-text
---
Moondream is a small vision language model designed to run efficiently everywhere.
[Website](https://moondream.ai/) / [Demo](https://moondream.ai/playground) / [GitHub](https://github.com/vikhyat/moondream)
This repository contains the 2025-04-14 **4-bit** release of Moondream. On an Nvidia RTX 3090, it uses 2,450 MB of VRAM and runs at 184 tokens/second. We used quantization-aware training techniques to build this version of the model, allowing us to achieve a 42% reduction in memory usage with only a 0.6% drop in accuracy.
There's more information about this version of the model in our [release blog post](https://moondream.ai/blog/smaller-faster-moondream-with-qat). Other revisions, as well as release history, can be found [here](https://huggingface.co/vikhyatk/moondream2).
### Usage
Make sure to install the requirements:
```
pip install pillow torchao
```
```python
from transformers import AutoModelForCausalLM
from PIL import Image
model = AutoModelForCausalLM.from_pretrained(
    "moondream/moondream-2b-2025-04-14-4bit",
    trust_remote_code=True,
    device_map={"": "cuda"}
)
# Optional, but recommended when running inference on a large number of
# images since it has upfront compilation cost but significantly speeds
# up inference:
model.model.compile()
# Load an image to caption/query (the path below is a placeholder)
image = Image.open("path/to/image.jpg")
# Captioning
print("Short caption:")
print(model.caption(image, length="short")["caption"])
print("\nNormal caption:")
for t in model.caption(image, length="normal", stream=True)["caption"]:
    # Streaming generation example, supported for caption() and detect()
    print(t, end="", flush=True)
print(model.caption(image, length="normal"))
# Visual Querying
print("\nVisual query: 'How many people are in the image?'")
print(model.query(image, "How many people are in the image?")["answer"])
# Object Detection
print("\nObject detection: 'face'")
objects = model.detect(image, "face")["objects"]
print(f"Found {len(objects)} face(s)")
# Pointing
print("\nPointing: 'person'")
points = model.point(image, "person")["points"]
print(f"Found {len(points)} person(s)")
``` |
phospho-app/Starkosaure-ACT-Follow_The_Line-1rt2z | phospho-app | 2025-05-22T00:14:13Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-05-21T21:33:34Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [Starkosaure/Follow_The_Line](https://huggingface.co/datasets/Starkosaure/Follow_The_Line)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 60
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf | RichardErkhov | 2025-05-22T00:14:08Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-21T21:07:26Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.1-8B-Instruct-800-classification - GGUF
- Model creator: https://huggingface.co/asimsultan/
- Original model: https://huggingface.co/asimsultan/Llama-3.1-8B-Instruct-800-classification/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.1-8B-Instruct-800-classification.Q2_K.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama-3.1-8B-Instruct-800-classification.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama-3.1-8B-Instruct-800-classification.IQ3_S.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama-3.1-8B-Instruct-800-classification.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama-3.1-8B-Instruct-800-classification.IQ3_M.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama-3.1-8B-Instruct-800-classification.Q3_K.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama-3.1-8B-Instruct-800-classification.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama-3.1-8B-Instruct-800-classification.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama-3.1-8B-Instruct-800-classification.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama-3.1-8B-Instruct-800-classification.Q4_0.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama-3.1-8B-Instruct-800-classification.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama-3.1-8B-Instruct-800-classification.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama-3.1-8B-Instruct-800-classification.Q4_K.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama-3.1-8B-Instruct-800-classification.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama-3.1-8B-Instruct-800-classification.Q4_1.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama-3.1-8B-Instruct-800-classification.Q5_0.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama-3.1-8B-Instruct-800-classification.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama-3.1-8B-Instruct-800-classification.Q5_K.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama-3.1-8B-Instruct-800-classification.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama-3.1-8B-Instruct-800-classification.Q5_1.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama-3.1-8B-Instruct-800-classification.Q6_K.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama-3.1-8B-Instruct-800-classification.Q8_0.gguf](https://huggingface.co/RichardErkhov/asimsultan_-_Llama-3.1-8B-Instruct-800-classification-gguf/blob/main/Llama-3.1-8B-Instruct-800-classification.Q8_0.gguf) | Q8_0 | 7.95GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tar112/bible-qa-model | tar112 | 2025-05-22T00:13:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T00:12:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF | mradermacher | 2025-05-22T00:12:50Z | 33 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"pt",
"base_model:amadeusai/Amadeus-Verbo-FI-Qwen2.5-32B-PT-BR-Instruct",
"base_model:quantized:amadeusai/Amadeus-Verbo-FI-Qwen2.5-32B-PT-BR-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-30T01:39:34Z | ---
base_model: amadeusai/Amadeus-Verbo-FI-Qwen2.5-32B-PT-BR-Instruct
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amadeusai/Amadeus-Verbo-FI-Qwen2.5-32B-PT-BR-Instruct
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AV-FI-Qwen2.5-32B-PT-BR-Instruct-GGUF/resolve/main/AV-FI-Qwen2.5-32B-PT-BR-Instruct.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
salma-remyx/spacellava-1.5-7b | salma-remyx | 2025-05-22T00:11:15Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"remyx",
"llava-1.5",
"spatial-reasoning",
"multimodal",
"vlm",
"vqasynth",
"robotics",
"embodied-ai",
"quantitative-spatial-reasoning",
"distance-estimation",
"visual-question-answering",
"conversational",
"en",
"dataset:remyxai/OpenSpaces_MC",
"base_model:liuhaotian/llava-v1.5-7b",
"base_model:finetune:liuhaotian/llava-v1.5-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-21T03:15:57Z | ---
library_name: transformers
tags:
- remyx
- llava-1.5
- spatial-reasoning
- multimodal
- vlm
- vqasynth
- robotics
- embodied-ai
- quantitative-spatial-reasoning
- distance-estimation
- visual-question-answering
license: apache-2.0
datasets:
- remyxai/OpenSpaces_MC
language:
- en
base_model:
- liuhaotian/llava-v1.5-7b
pipeline_tag: image-text-to-text
---

# SpaceLLaVA-7B
[](https://colab.research.google.com/drive/19H_hOL8gc1nFQKpDoioJR8nDWX1lsNZM?usp=sharing)
Run the colab above to analyze attention patterns for **SpaceLLaVA-7B** using this [TransformerLens notebook](https://github.com/TransformerLensOrg/TransformerLens/blob/main/demos/LLaVA.ipynb)
 |
eagle0504/openai-gsm8k-codealpaca-20k-enhanced-deepseek-r1-distill-qwen-1.5b | eagle0504 | 2025-05-22T00:05:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"question-answering",
"en",
"dataset:eagle0504/augmented_codealpaca-20k-using-together-ai-deepseek-v1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-05-21T23:58:44Z | ---
library_name: transformers
license: mit
datasets:
- eagle0504/augmented_codealpaca-20k-using-together-ai-deepseek-v1
language:
- en
new_version: eagle0504/finetuned-deepseek-r1-distill-qwen-1.5b-by-openai-gsm8k-enhanced-v2
pipeline_tag: question-answering
---
# Model Card for CodeAlpaca-20k Dataset Enhanced with Reasoning
This model is fine-tuned to answer questions based on the CodeAlpaca-20k dataset, enhanced with reasoning provided by DeepSeek R1.
A publicly available Colab notebook for testing the model is shared [here](https://colab.research.google.com/drive/1B_Fbz0w76QxHbo9zAOf_pyZKKNI0EJJ9?usp=sharing).
---
## Model Details
### Model Description
This is a transformer-based question-answering model fine-tuned from `eagle0504/finetuned-deepseek-r1-distill-qwen-1.5b-by-openai-gsm8k-enhanced-v2`. It was trained on a dataset derived from the OpenAI GSM8K benchmark, enhanced with chain-of-thought reasoning to encourage intermediate logical steps. The dataset pairs math word problems with structured answers, using `<think>...</think>` and `<answer>...</answer>` tags.
- **Developed by:** Yiqiao Yin
- **Model type:** Causal Language Model (fine-tuned for Q&A with reasoning)
- **Language(s):** English
- **License:** MIT
- **Finetuned from model:** eagle0504/finetuned-deepseek-r1-distill-qwen-1.5b-by-openai-gsm8k-enhanced-v2
---
## Training Configuration
- 🖥️ **Hardware:** Trained on a RunPod instance with:
- 🔥 4 x A100 SXM
- 🧠 146 vCPU
- 🧮 1144 GB RAM
- 💽 20 GB disk per GPU
- 🐳 **Container Image:** `runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04`
- ⏱️ **Total Training Time:** 2 hours
- 💸 **Cost:** ~$7.56/hour × 2 hours ≈ **$15 USD**
- ⚙️ **Zero Redundancy Optimization:** DeepSpeed Stage 1 (a config sketch follows this list)
- 🎯 **Precision:** FP16 mixed-precision training
---
## Performance
- **Mean token-level accuracy:** **98%**
- Evaluation based on in-training token match accuracy over the formatted `<think>...</think><answer>...</answer>` structure.
- Model demonstrates strong reasoning capability in multi-step arithmetic and logic problems.
---
## Inference Format
To generate accurate completions, prompt the model in the following structure:
```
<question>Question: If Sally has 3 apples and buys 2 more, how many does she have in total?</question>
```
Be aware that the closing `</question>` token cues the model to begin its answer with `<think>`, a behavior learned from the training data.
The model will continue reasoning within `<think>...</think>` and provide a final answer inside `<answer>...</answer>`.
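A minimal generation sketch following this format (generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eagle0504/openai-gsm8k-codealpaca-20k-enhanced-deepseek-r1-distill-qwen-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
prompt = "<question>Question: If Sally has 3 apples and buys 2 more, how many does she have in total?</question>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# The continuation should contain <think>...</think> followed by <answer>...</answer>
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```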
---
## Intended Use
This model is intended for educational and research purposes in:
- Chain-of-thought prompting
- Math reasoning and logical inference
- Question-answering with intermediate steps
---
## Limitations
- Trained on structured synthetic data — real-world generalization may vary
- Best performance achieved when following the exact inference format
- Does not support multilingual inputs
---
## Citation
If you use this model, please cite:
```
@misc{yin2024gsm8k,
author = {Yiqiao Yin},
title = {TBD},
year = 2025,
note = {TBD}
}
```
## Model Card Contact
Author: Yiqiao Yin
Connect with me on [LinkedIn](https://www.linkedin.com/in/yiqiaoyin/) |
Demonthos/Qwen2.5-VL-32B-Instruct-GGUF-merged | Demonthos | 2025-05-22T00:05:27Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-22T00:04:57Z | ---
license: apache-2.0
---
ggml-org/Qwen2.5-VL-32B-Instruct-GGUF, but with merged f16 files |
g-assismoraes/gemma-3-4b-it-fpi-alpha-1.0-var-tiebe | g-assismoraes | 2025-05-22T00:04:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-22T00:00:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cdp57/MM_gemma12k2 | cdp57 | 2025-05-22T00:02:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-22T00:02:22Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** cdp57
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jruaechalar/firma_cuarentaydos | jruaechalar | 2025-05-22T00:01:43Z | 0 | 0 | diffusers | [
"diffusers",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2025-05-21T20:33:03Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sarahpa/ft-openhermes-25-mistral-7b-irca-dpo-pairs | Sarahpa | 2025-05-22T00:00:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T23:59:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bowen118/Qwen2.5-7B-Instruct-sft | bowen118 | 2025-05-21T23:59:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T23:32:29Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: Qwen2.5-7B-Instruct-sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-sft
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bowen118/Qwen2.5-7B-Instruct-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bowen118-stanford-university/papertrace/runs/ereagfnl)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
caoxiaoxue/nunchaku0.3 | caoxiaoxue | 2025-05-21T23:58:06Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-05-21T11:09:02Z | ---
license: mit
---
Requires torch 2.7, Python 3.12, CUDA 12.8 (tested on an RTX 5070 Ti). First delete the old ComfyUI-nunchaku 0.2 install, then: git clone https://github.com/mit-han-lab/ComfyUI-nunchaku.git |
CreitinGameplays/Mistral-Nemo-12B-R1-v0.2 | CreitinGameplays | 2025-05-21T23:54:17Z | 169 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B-filtered-mistral",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Instruct-2407",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-18T01:09:24Z | ---
license: mit
datasets:
- >-
CreitinGameplays/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B-filtered-mistral
language:
- en
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
pipeline_tag: text-generation
library_name: transformers
---
## Mistral Nemo 12B R1

Fine-tuning took **12 hours** on **1x Nvidia H100** with the following settings:
- Batch size: 26
- Gradient accumulation steps: 1
- Epochs: 1
- Learning rate: 2e-5
- Warmup ratio: 0.1
Run the model:
```python
import torch
from transformers import pipeline
model_id = "CreitinGameplays/Mistral-Nemo-12B-R1-v0.2"
pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "How many r's are in strawberry?"}
]
outputs = pipe(
messages,
max_new_tokens=2048
)
print(outputs[0]["generated_text"][-1])
```
### Example response generated with this model:
- User prompt:
```
how many r's are in the word strawberry
```
- Model output:
```
<think>
Okay, so the question is asking how many R's are in the word "strawberry." Hmm, let me think about this step by step.
First, I need to write down the word to make it easier to count. So, "strawberry" is the word. Now, I should go through each letter one by one to check for the letter R.
Starting from the beginning, the first letter is S. Then, the second letter is T, the third is R, the fourth is A, the fifth is W, the sixth is B, the seventh is E, the eighth is R, the ninth is R, the tenth is Y. Wait, so I've counted up to the tenth letter, which is Y.
Now, I need to count how many of those are R's. Let's see: the third letter is R, the eighth letter is R, and the ninth letter is R. So that's three R's in total.
Wait, hold on, let me double-check to make sure I didn't miss any. The word is "strawberry." S, T, R, A, W, B, E, R, R, Y. Yeah, that's three R's. So, the answer should be three.
I think that's it. I don't see any other R's in the word. So, the number of R's in "strawberry" is three.
**Final Answer**
The word "strawberry" contains three R's.
</think>
The word "strawberry" contains three R's.
**Answer:** Three R's.</s>
```
Example 2:
- User prompt:
```
which number has bigger value, 9.9 or 9.11? think step by step.
```
- Model output:
```
<think>
Okay, so I have this question here: which number is bigger, 9.9 or 9.11? Hmm, let me think about this step by step. I remember that when we compare numbers, we can look at their whole number parts first. So, both numbers start with 9. That means they are equal in terms of the whole number part, right? So, they are both 9.
Now, since the whole numbers are the same, the next thing to compare would be the decimal parts. The decimal part of 9.9 is 0.9, and for 9.11, it's 0.11. I need to figure out which of these decimal parts is larger.
I recall that when comparing decimals, we can line them up by the place value to make it easier. So, let's write them out:
9.9 is equal to 9.900000...
And 9.11 is equal to 9.110000...
Now, if I line them up by the tenths place, it looks like this:
9.900000...
9.110000...
Looking at the tenths place, the first number has 9 and the second has 1. Since 9 is larger than 1, the first number is bigger. Therefore, 9.9 is larger than 9.11.
Wait, but let me double-check to make sure I didn't make a mistake. Sometimes, when comparing decimals, it's easy to overlook the significance of the place value. So, let me consider the hundredths place as well. In 9.9, the hundredths place is 9, and in 9.11, it's 1. So, again, 9 is larger than 1, which confirms that 9.9 is bigger.
Is there another way to think about this? Maybe by converting them to fractions or something? Let me try that. 9.9 is equal to 9 and 9/10, and 9.11 is equal to 9 and 11/100. So, 9 and 9/10 is larger than 9 and 11/100 because 9/10 is larger than 11/100.
Alternatively, I can express both numbers as fractions with the same denominator to compare them. 9.9 is 99/10, and 9.11 is 911/100. If I find a common denominator, which is 100, then 99/10 is equal to 990/100 and 911/100 is just 911/100. Comparing 990 and 911, 990 is definitely larger than 911, so 9.9 is bigger.
Another way is to think about how many times 0.1 is in each number. 9.9 is 9 and a half, which is 9.5, and 9.11 is 9 and a bit more than half, which is less than 9.5. So, 9.9 is larger.
Wait, but maybe I should consider the context of the question. Is this a math problem or something else? If it's a math problem, then the answer is straightforward. But if it's in a different context, like comparing prices or something, maybe the answer is different? Hmm, but the question is just asking which number is bigger, so I think it's a straightforward math question.
Let me also think about the place value in a different way. The number 9.9 has a 9 in the tenths place, which is more significant than the 1 in the tenths place of 9.11. So, that means 9.9 is larger.
Is there a way to visualize this? Maybe by using a number line. On the number line, 9.9 is to the right of 9.11, so it's larger. That makes sense.
Another thought: if I add 0.1 to both numbers, 9.9 becomes 10, and 9.11 becomes 9.21. Clearly, 10 is larger than 9.21, so 9.9 is bigger. That seems to confirm my earlier conclusion.
Wait, let me try adding 0.01 to both numbers. 9.9 becomes 9.91, and 9.11 becomes 9.12. Now, 9.91 is still larger than 9.12. So, that also shows that 9.9 is bigger.
I think I've considered this from multiple angles, and each time, 9.9 comes out as the larger number. So, I'm confident that 9.9 is bigger than 9.11.
**Final Answer**
The number with the larger value is \boxed{9.9}.
</think>
To determine which number is larger, 9.9 or 9.11, we can follow these steps:
1. **Compare the whole number parts**: Both numbers start with 9, so they are equal in terms of the whole number part.
2. **Compare the decimal parts**: The decimal part of 9.9 is 0.9, and the decimal part of 9.11 is 0.11. We need to compare these two decimals.
3. **Line up the decimals by place value**:
- 9.9 is equal to 9.900000...
- 9.11 is equal to 9.110000...
4. **Compare the tenths place**: The first number has 9 and the second has 1. Since 9 is larger than 1, the first number is bigger.
5. **Compare the hundredths place**: The first number has 9 and the second has 1. Again, 9 is larger than 1, confirming that the first number is bigger.
6. **Convert to fractions for verification**:
- 9.9 is equal to 99/10.
- 9.11 is equal to 911/100.
- Comparing 99/10 and 911/100, we find that 99/10 is larger than 911/100.
7. **Consider the number line**: On the number line, 9.9 is to the right of 9.11, indicating that it is larger.
8. **Add small numbers to both**:
- Adding 0.1 to both: 9.9 becomes 10, and 9.11 becomes 9.21. Clearly, 10 is larger.
- Adding 0.01 to both: 9.9 becomes 9.91, and 9.11 becomes 9.12. Again, 9.91 is larger.
All methods lead to the conclusion that 9.9 is larger than 9.11.
\[
\boxed{9.9}
\]</s>
```
**Note**: The model was mainly finetuned on an English dataset, so it may not perform well in other languages. The model _may_ also lose context or ignore your last turn, depending on your prompt. |
ema1234/qwen_qlora_ntp_merged_new | ema1234 | 2025-05-21T23:53:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-21T23:53:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kurosawama/Llama-2-7b-chat-DPO-beamsearch-align | Kurosawama | 2025-05-21T23:45:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T23:45:13Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
john-at-reticular/simple_stories_nested_dropout | john-at-reticular | 2025-05-21T23:42:02Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-10T10:34:38Z | ---
license: apache-2.0
---
|
mlfoundations-dev/e1_science_longest_r1_1k | mlfoundations-dev | 2025-05-21T23:39:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T21:38:54Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: e1_science_longest_r1_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e1_science_longest_r1_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_science_longest_r1_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ntnu-smil/whisper-large-v3-turbo-sandi-train-1-rich-transcript-32 | ntnu-smil | 2025-05-21T23:39:35Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"wft",
"whisper",
"automatic-speech-recognition",
"audio",
"speech",
"generated_from_trainer",
"en",
"dataset:ntnu-smil/sandi2025-ds",
"base_model:openai/whisper-large-v3-turbo",
"base_model:adapter:openai/whisper-large-v3-turbo",
"license:mit",
"model-index",
"region:us"
] | automatic-speech-recognition | 2025-05-21T22:05:45Z | ---
library_name: peft
language:
- en
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- wft
- whisper
- automatic-speech-recognition
- audio
- speech
- generated_from_trainer
datasets:
- ntnu-smil/sandi2025-ds
metrics:
- wer
model-index:
- name: whisper-large-v3-turbo-sandi-train-1-rich-transcript-32
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: ntnu-smil/sandi2025-ds
type: ntnu-smil/sandi2025-ds
metrics:
- type: wer
value: 23.248425746388847
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v3-turbo-sandi-train-1-rich-transcript-32
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the ntnu-smil/sandi2025-ds dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7920
- Wer: 23.2484
- Cer: 16.5856
- Decode Runtime: 203.4514
- Wer Runtime: 0.1639
- Cer Runtime: 0.3157
## Model description
More information needed
## Intended uses & limitations
More information needed
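
Since this repository contains a PEFT (LoRA) adapter rather than full model weights, inference attaches the adapter to the base Whisper model. A minimal sketch, assuming the adapter loads directly with `peft`; the audio loading and generation settings below are illustrative:

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-large-v3-turbo"
adapter_id = "ntnu-smil/whisper-large-v3-turbo-sandi-train-1-rich-transcript-32"

# Load the base model, then attach this repository's LoRA adapter
processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id).eval()

audio = ...  # 16 kHz mono waveform (numpy array), e.g. loaded with librosa
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    predicted_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```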
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.98) and epsilon=1e-06 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 732
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Decode Runtime | Wer Runtime | Cer Runtime |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------:|:--------------:|:-----------:|:-----------:|
| 0.9653 | 0.1667 | 122 | 0.8334 | 103.3411 | 64.3972 | 245.8729 | 0.1967 | 0.3575 |
| 1.1313 | 1.1667 | 244 | 0.8073 | 53.0086 | 33.9467 | 210.6469 | 0.1851 | 0.3293 |
| 0.54 | 2.1667 | 366 | 0.7915 | 25.4142 | 18.3008 | 196.4910 | 0.1906 | 0.3139 |
| 0.3761 | 3.1667 | 488 | 0.7882 | 24.2463 | 17.3425 | 196.9004 | 0.1675 | 0.3169 |
| 0.8462 | 4.1667 | 610 | 0.7921 | 23.4051 | 16.7178 | 197.5723 | 0.1661 | 0.3141 |
| 0.9957 | 5.1667 | 732 | 0.7920 | 23.2484 | 16.5856 | 203.4514 | 0.1639 | 0.3157 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.2
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1 |
fpadovani/fr_wiki_clm_13 | fpadovani | 2025-05-21T23:38:49Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-07T07:45:17Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: fr_wiki_clm_13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fr_wiki_clm_13
This model was trained from scratch (no base model specified) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4708
## Model description
More information needed
## Intended uses & limitations
More information needed
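
Given the `gpt2` architecture tag and the text-generation pipeline tag, the model can be tried with a standard generation pipeline. A minimal sketch; the French prompt and decoding settings are illustrative:

```python
from transformers import pipeline

# Causal LM trained on French Wikipedia text, per the model name
generator = pipeline("text-generation", model="fpadovani/fr_wiki_clm_13")
print(generator("La France est", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```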
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40000
- training_steps: 100000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:------:|:---------------:|
| No log | 1.7190 | 2000 | 7.1490 |
| 7.2211 | 3.4379 | 4000 | 5.8869 |
| 7.2211 | 5.1569 | 6000 | 5.4206 |
| 5.4603 | 6.8758 | 8000 | 5.0644 |
| 5.4603 | 8.5948 | 10000 | 4.7704 |
| 4.8205 | 10.3137 | 12000 | 4.5274 |
| 4.8205 | 12.0327 | 14000 | 4.3241 |
| 4.369 | 13.7516 | 16000 | 4.1509 |
| 4.369 | 15.4706 | 18000 | 4.0131 |
| 4.039 | 17.1895 | 20000 | 3.8967 |
| 4.039 | 18.9085 | 22000 | 3.7902 |
| 3.792 | 20.6274 | 24000 | 3.7083 |
| 3.792 | 22.3464 | 26000 | 3.6318 |
| 3.5988 | 24.0653 | 28000 | 3.5716 |
| 3.5988 | 25.7843 | 30000 | 3.5175 |
| 3.4366 | 27.5032 | 32000 | 3.4818 |
| 3.4366 | 29.2222 | 34000 | 3.4478 |
| 3.303 | 30.9411 | 36000 | 3.4144 |
| 3.303 | 32.6601 | 38000 | 3.3936 |
| 3.1808 | 34.3790 | 40000 | 3.3769 |
| 3.1808 | 36.0980 | 42000 | 3.3654 |
| 3.0681 | 37.8169 | 44000 | 3.3452 |
| 3.0681 | 39.5359 | 46000 | 3.3394 |
| 2.9562 | 41.2548 | 48000 | 3.3389 |
| 2.9562 | 42.9738 | 50000 | 3.3301 |
| 2.8647 | 44.6927 | 52000 | 3.3356 |
| 2.8647 | 46.4117 | 54000 | 3.3422 |
| 2.7848 | 48.1306 | 56000 | 3.3468 |
| 2.7848 | 49.8496 | 58000 | 3.3467 |
| 2.713 | 51.5685 | 60000 | 3.3575 |
| 2.713 | 53.2875 | 62000 | 3.3672 |
| 2.6546 | 55.0064 | 64000 | 3.3688 |
| 2.6546 | 56.7254 | 66000 | 3.3764 |
| 2.5942 | 58.4443 | 68000 | 3.3907 |
| 2.5942 | 60.1633 | 70000 | 3.3962 |
| 2.5486 | 61.8823 | 72000 | 3.4000 |
| 2.5486 | 63.6012 | 74000 | 3.4071 |
| 2.5009 | 65.3202 | 76000 | 3.4164 |
| 2.5009 | 67.0391 | 78000 | 3.4277 |
| 2.4625 | 68.7581 | 80000 | 3.4293 |
| 2.4625 | 70.4770 | 82000 | 3.4382 |
| 2.4249 | 72.1960 | 84000 | 3.4481 |
| 2.4249 | 73.9149 | 86000 | 3.4495 |
| 2.3918 | 75.6339 | 88000 | 3.4554 |
| 2.3918 | 77.3528 | 90000 | 3.4596 |
| 2.3637 | 79.0718 | 92000 | 3.4637 |
| 2.3637 | 80.7907 | 94000 | 3.4666 |
| 2.337 | 82.5097 | 96000 | 3.4702 |
| 2.337 | 84.2286 | 98000 | 3.4712 |
| 2.3163 | 85.9476 | 100000 | 3.4708 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.1+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
DaniloNeto/roco_llava_kd | DaniloNeto | 2025-05-21T23:38:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/llava-1.5-7b-hf-bnb-4bit",
"base_model:finetune:unsloth/llava-1.5-7b-hf-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-21T23:30:10Z | ---
base_model: unsloth/llava-1.5-7b-hf-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llava
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** DaniloNeto
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llava-1.5-7b-hf-bnb-4bit
This llava model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
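
A minimal inference sketch using the generic image-text-to-text pipeline (the image URL and prompt are illustrative; the base checkpoint is 4-bit, so a GPU with bitsandbytes support is assumed):

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="DaniloNeto/roco_llava_kd")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chest_xray.png"},
        {"type": "text", "text": "Describe the findings in this image."},
    ],
}]
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```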
|
phospho-app/zedlika-ACT-DiceFlip-33yg3s8518 | phospho-app | 2025-05-21T23:37:25Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-05-21T21:45:59Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [zedlika/DiceFlip](https://huggingface.co/datasets/zedlika/DiceFlip)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 40
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
sourav-rapid-innovation/QWEN2.5-VL-3B-Full_Retraining-7-X-3000_batch_size_64 | sourav-rapid-innovation | 2025-05-21T23:37:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-21T23:32:13Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CoBaLD/distilbert-en-debug | CoBaLD | 2025-05-21T23:37:05Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"cobald_parser",
"feature-extraction",
"pytorch",
"cobald-parsing",
"custom_code",
"en",
"dataset:CoBaLD/enhanced-cobald",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:gpl-3.0",
"model-index",
"region:us"
] | null | 2025-05-21T20:40:16Z | ---
base_model: distilbert-base-uncased
datasets: CoBaLD/enhanced-cobald
language: en
library_name: transformers
license: gpl-3.0
metrics:
- accuracy
- f1
pipeline_tag: cobald-parsing
tags:
- pytorch
model-index:
- name: CoBaLD/distilbert-en-debug
results:
- task:
type: token-classification
dataset:
name: enhanced-cobald
type: CoBaLD/enhanced-cobald
split: validation
metrics:
- type: f1
value: 0.9101676266662463
name: Null F1
- type: f1
value: 0.5409095785066166
name: Lemma F1
- type: f1
value: 0.6799165470790125
name: Morphology F1
- type: accuracy
value: 0.7116831556366664
name: Ud Jaccard
- type: accuracy
value: 0.6521369596891695
name: Eud Jaccard
- type: f1
value: 0.4487314232159592
name: Miscs F1
- type: f1
value: 0.42196090630447897
name: Deepslot F1
- type: f1
value: 0.5291347527801121
name: Semclass F1
---
# Model Card for distilbert-en-debug
A transformer-based multihead parser for CoBaLD annotation.
This model parses pre-tokenized text in CoNLL-U format and jointly labels each token with three tiers of tags:
* Grammatical tags (lemma, UPOS, XPOS, morphological features),
* Syntactic tags (basic and enhanced Universal Dependencies),
* Semantic tags (deep slot and semantic class).
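
The repository ships custom modeling code (`custom_code` tag), so loading goes through `trust_remote_code`; the exact prediction interface is defined by that custom code. A minimal loading sketch:

```python
from transformers import AutoModel

# The custom CoBaLD parser class is resolved from the repository's own code
model = AutoModel.from_pretrained("CoBaLD/distilbert-en-debug", trust_remote_code=True)
```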
## Model Sources
- **Repository:** https://github.com/CobaldAnnotation/CobaldParser
- **Paper:** https://dialogue-conf.org/wp-content/uploads/2025/04/BaiukIBaiukAPetrovaM.009.pdf
- **Demo:** [coming soon]
## Citation
```
@inproceedings{baiuk2025cobald,
title={CoBaLD Parser: Joint Morphosyntactic and Semantic Annotation},
author={Baiuk, Ilia and Baiuk, Alexandra and Petrova, Maria},
booktitle={Proceedings of the International Conference "Dialogue"},
volume={I},
year={2025}
}
``` |
MinaMila/llama_instbase_LoRa_GermanCredit_ep3_42 | MinaMila | 2025-05-21T23:36:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T23:36:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
warda-10/image-caption-app | warda-10 | 2025-05-21T23:33:38Z | 0 | 0 | keras | [
"keras",
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T23:24:39Z | ---
license: apache-2.0
---
|
mrtlyc/aiboy | mrtlyc | 2025-05-21T23:32:14Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-21T23:32:14Z | ---
license: apache-2.0
---
|
99eren99/TrColBERT-Long | 99eren99 | 2025-05-21T23:30:21Z | 0 | 0 | PyLate | [
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"loss:Distillation",
"turkish",
"arxiv:1908.10084",
"region:us"
] | sentence-similarity | 2025-05-21T22:53:38Z | ---
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- loss:Distillation
- turkish
pipeline_tag: sentence-similarity
library_name: PyLate
---
# PyLate
This is a [PyLate](https://github.com/lightonai/pylate) model. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
## Model Details
### Model Description
- **Model Type:** PyLate model
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Document Length:** 8192 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 tokens
- **Similarity Function:** MaxSim
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
# Evaluation
nDCG and Recall scores of this model (out-of-domain predictions) and other multilingual late-interaction retrieval models on [Tr-NanoBEIR](https://huggingface.co/datasets/99eren99/Tr-NanoBEIR).
<img src="https://huggingface.co/99eren99/TrColBERT-Long/resolve/main/assets/scores.png"
alt="drawing"/>
## Usage
First, install the required libraries (a GPU supporting Flash Attention 2 is a must for consistent outputs; otherwise you need to mask the query-expansion tokens in the output layer manually):
```bash
pip install -U einops flash_attn
pip install -U pylate
```
Then normalize your text, e.g. `lambda x: x.replace("İ", "i").replace("I", "ı").lower()`
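As a minimal sketch (the sample strings below are illustrative), the same normalization can be applied to queries and documents before encoding:

```python
# Turkish-aware lowercasing, applied to both queries and documents before encoding
normalize = lambda x: x.replace("İ", "i").replace("I", "ı").lower()

queries = ["İstanbul'un nüfusu kaçtır?"]                     # illustrative sample strings
documents = ["İstanbul, Türkiye'nin en kalabalık şehridir."]

queries = [normalize(q) for q in queries]
documents = [normalize(d) for d in documents]
```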
### Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
#### Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
document_length = 8192  # [1, 8192] for truncating documents
model = models.ColBERT(
    model_name_or_path="99eren99/TrColBERT-Long", document_length=document_length
)
try:
    # ModernBERT does not use token_type_ids; drop the key if present
    model.tokenizer.model_input_names.remove("token_type_ids")
except ValueError:
    pass
model.eval()
model.to("cuda")
# Step 2: Initialize the Voyager index
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the top matches ids and relevance scores:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
```
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the `rank` function and pass the queries and documents to rerank:
```python
from pylate import rank, models
queries = [
    "query A",
    "query B",
]
documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]
documents_ids = [
    [1, 2],
    [1, 3, 2],
]
model = models.ColBERT(
    model_name_or_path="99eren99/TrColBERT-Long",
)
queries_embeddings = model.encode(
    queries,
    is_query=True,
)
documents_embeddings = model.encode(
    documents,
    is_query=False,
)
reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.0.2
- PyLate: 1.1.7
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 2.21.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
author={Chaffin, Antoine and Sourty, Raphaël},
url={https://github.com/lightonai/pylate},
year={2024}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MinaMila/llama_instbase_LoRa_GermanCredit_ep2_42 | MinaMila | 2025-05-21T23:30:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T23:30:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aw1605/cogbehav_sft_0 | aw1605 | 2025-05-21T23:28:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T23:27:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF | mradermacher | 2025-05-21T23:28:02Z | 316 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"pt",
"base_model:amadeusai/Amadeus-Verbo-BI-Qwen-2.5-7B-PT-BR-Instruct-Experimental",
"base_model:quantized:amadeusai/Amadeus-Verbo-BI-Qwen-2.5-7B-PT-BR-Instruct-Experimental",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-12T08:26:00Z | ---
base_model: amadeusai/Amadeus-Verbo-BI-Qwen-2.5-7B-PT-BR-Instruct-Experimental
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/amadeusai/Amadeus-Verbo-BI-Qwen-2.5-7B-PT-BR-Instruct-Experimental
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF | mradermacher | 2025-05-21T23:27:44Z | 358 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"pt",
"base_model:amadeusai/Amadeus-Verbo-BI-Qwen-2.5-7B-PT-BR-Instruct-Experimental",
"base_model:quantized:amadeusai/Amadeus-Verbo-BI-Qwen-2.5-7B-PT-BR-Instruct-Experimental",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-12T09:40:58Z | ---
base_model: amadeusai/Amadeus-Verbo-BI-Qwen-2.5-7B-PT-BR-Instruct-Experimental
language:
- pt
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/amadeusai/Amadeus-Verbo-BI-Qwen-2.5-7B-PT-BR-Instruct-Experimental
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/AV-BI-Qwen2.5-7B-PT-BR-Instruct-i1-GGUF/resolve/main/AV-BI-Qwen2.5-7B-PT-BR-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlfoundations-dev/e1_code_fasttext_r1_1k | mlfoundations-dev | 2025-05-21T23:27:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T20:17:21Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: e1_code_fasttext_r1_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e1_code_fasttext_r1_1k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_code_fasttext_r1_1k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
taoranl2/qwen25-coder-32b-hazard_epoch_1 | taoranl2 | 2025-05-21T23:27:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-Coder-32B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-21T23:17:12Z | ---
base_model: unsloth/Qwen2.5-Coder-32B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** taoranl2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-32B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/General-Reasoner-14B-preview-GGUF | mradermacher | 2025-05-21T23:26:55Z | 288 | 2 | transformers | [
"transformers",
"gguf",
"en",
"dataset:TIGER-Lab/VisualWebInstruct-Verified",
"base_model:TIGER-Lab/General-Reasoner-Qwen2.5-14B",
"base_model:quantized:TIGER-Lab/General-Reasoner-Qwen2.5-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-12T13:37:09Z | ---
base_model: TIGER-Lab/General-Reasoner-Qwen2.5-14B
datasets:
- TIGER-Lab/VisualWebInstruct-Verified
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TIGER-Lab/General-Reasoner-Qwen2.5-14B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/General-Reasoner-14B-preview-GGUF/resolve/main/General-Reasoner-14B-preview.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MinaMila/llama_instbase_LoRa_GermanCredit_ep1_42 | MinaMila | 2025-05-21T23:23:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-21T23:23:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lodestones/Chroma | lodestones | 2025-05-21T23:23:48Z | 0 | 637 | pytorch | [
"pytorch",
"text-to-image",
"image-generation",
"chroma",
"en",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-01-27T22:04:09Z | ---
language:
- en
license: apache-2.0
tags:
- text-to-image
- image-generation
- chroma
pipeline_tag: text-to-image
library_name: pytorch
---
# Chroma: Open-Source, Uncensored, and Built for the Community
Chroma is a **8.9B** parameter model based on **FLUX.1-schnell** (technical report coming soon!). It’s fully **Apache 2.0 licensed**, ensuring that **anyone** can use, modify, and build on top of it—no corporate gatekeeping.
The model is **still training right now**, and I’d love to hear your thoughts! Your input and feedback are really appreciated.
# What Chroma Aims to Do
* Training on a **5M dataset**, curated from **20M** samples including anime, furry, artistic stuff, and photos.
* **Fully uncensored**, reintroducing missing anatomical concepts.
* Built as a **reliable open-source option** for those who need it.
# See the Progress
* **Hugging Face Debug Repo:** [**https://huggingface.co/lodestones/chroma-debug-development-only**](https://huggingface.co/lodestones/chroma-debug-development-only)
* **Live AIM Training Logs:** [**https://training.lodestone-rock.com**](https://training.lodestone-rock.com)
* **Training code!:** [**https://github.com/lodestone-rock/flow**](https://github.com/lodestone-rock/flow)
* **CivitAi gallery:** [**https://civitai.com/posts/13766416**](https://civitai.com/posts/13766416)
* **CivitAi model:** [**https://civitai.com/models/1330309/chroma**](https://civitai.com/models/1330309/chroma)
# Special Thanks
Shoutout to Fictional.ai for the awesome support — seriously appreciate you helping push open-source AI forward.
You can try it over on their site:
[](https://fictional.ai/?ref=chroma_hf)
# Support Open-Source AI
The current pretraining run has already used **6000+ H100 hours**, and keeping this going long-term is expensive.
If you believe in **accessible, community-driven AI**, any support would be greatly appreciated.
👉 **[https://ko-fi.com/lodestonerock](https://ko-fi.com/lodestonerock) — Every bit helps!**
**ETH: 0x679C0C419E949d8f3515a255cE675A1c4D92A3d7**
my discord: [**discord.gg/SQVcWVbqKx**](http://discord.gg/SQVcWVbqKx)



## Table of Contents
- [Chroma: Open-Source, Uncensored, and Built for the Community](#chroma-open-source-uncensored-and-built-for-the-community)
- [How to run this model](#how-to-run-this-model)
- [ComfyUI](#comfyui)
- diffusers [WIP]
- brief tech report
- [Architectural modifications](#architectural-modifications)
- [12B → 8.9B](#12b-%E2%86%92-89b)
- [MMDiT masking](#mmdit-masking)
- [Timestep Distributions](#timestep-distributions)
- [Minibatch Optimal Transport](#minibatch-optimal-transport) [WIP]
- [Training Details]
- [T5 QAT training] [WIP]
- [Prior preserving distribution training] [WIP]
- [Scramming] [WIP]
- [blockwise droppout optimizers] [WIP]
- [Citation](#citation)
# How to run this model
## ComfyUI
### Requirements
- ComfyUI installation
- [Chroma checkpoint](https://huggingface.co/lodestones/Chroma) (pick the latest version on this repo)
- [Alternative option: FP8 Scaled Quant](https://huggingface.co/Clybius/Chroma-fp8-scaled) (Format used by ComfyUI with possible inference speed increase)
- [Alternative option: GGUF Quantized](https://huggingface.co/silveroxides/Chroma-GGUF) (You will need to install ComfyUI-GGUF custom node)
- [T5 XXL](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors) or [T5 XXL fp8](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors) (either of them will work)
- [FLUX VAE](https://huggingface.co/lodestones/Chroma/resolve/main/ae.safetensors)
- [Chroma_Workflow](https://huggingface.co/lodestones/Chroma/resolve/main/ChromaSimpleWorkflow20250507.json)
### Deprecated: Manual Installation (Chroma)
1. Navigate to your ComfyUI's `ComfyUI/custom_nodes` folder
2. Clone the repository:
```bash
git clone https://github.com/lodestone-rock/ComfyUI_FluxMod.git
```
3. Restart ComfyUI
4. Refresh your browser if ComfyUI is already running
### How to run the model
1. Put `T5_xxl` into the `ComfyUI/models/clip` folder
2. Put `FLUX VAE` into the `ComfyUI/models/vae` folder
3. Put the `Chroma checkpoint` into the `ComfyUI/models/diffusion_models` folder
4. Load the Chroma workflow into ComfyUI
5. Run the workflow
# Architectural Modifications
## 12B → 8.9B
### TL;DR: There are 3.3B parameters that only encode a single input vector, which I replaced with 250M params.
Since FLUX is so big, I had to modify the architecture and ensure minimal knowledge was lost in the process. The most obvious thing to prune was this modulation layer. In the diagram, it may look small, but in total, FLUX has 3.3B parameters allocated to it. Without getting too deep into the details, this layer's job is to let the model know which timestep it's at during the denoising process. This layer also receives information from pooled CLIP vectors.

But after a simple experiment of zeroing these pooled vectors out, the model’s output barely changed—which made pruning a breeze! Why? Because the only information left for this layer to encode is just a single number in the range of 0-1.
Yes, you heard it right—3.3B parameters were used to encode 8 bytes of float values. So this was the most obvious layer to prune and replace with a simple FFN. The whole replacement process only took a day on my single 3090, and after that, the model size was reduced to just 8.9B.
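As a rough sketch of the idea (my own illustration with assumed shapes and names, not Chroma's actual code), the pruned modulation stack can be replaced by one small FFN whose only job is to embed that scalar timestep:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: since the pooled-CLIP input turned out to be unused,
# the only signal left is the scalar timestep t in [0, 1], so a small FFN
# can emit all the modulation vectors the transformer blocks need.
class TimestepModulation(nn.Module):
    def __init__(self, hidden_dim: int = 3072, n_mod_vectors: int = 6):
        super().__init__()
        self.n_mod_vectors = n_mod_vectors
        self.hidden_dim = hidden_dim
        self.net = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim * n_mod_vectors),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch,) timesteps in [0, 1] -> (batch, n_mod_vectors, hidden_dim)
        return self.net(t[:, None]).view(-1, self.n_mod_vectors, self.hidden_dim)
```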
## MMDiT Masking
### TL;DR: Masking T5 padding tokens enhanced fidelity and increased stability during training.
It might not be obvious, but BFL had an oversight during pre-training: they forgot to mask the padding tokens in both T5 and MMDiT. So, for example, a short sentence like “a cat sat on a mat” actually looks like this to both T5 and MMDiT:
`<bos> a cat sat on a mat <pad><pad>...<pad><pad><pad>`

The model ends up paying way too much attention to padding tokens, drowning out the actual prompt information. The fix? Masking—so the model doesn’t associate anything with padding tokens.
But there’s a catch: if you mask out all padding tokens, the model falls out of distribution and generates a blurry mess. The solution? Unmask just one padding token while masking the rest.
With this fix, MMDiT now only needs to pay attention to:
`<bos> a cat sat on a mat <pad>`
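Here's a minimal sketch of that rule (assumed tensor layout; not the actual training code): take the T5 attention mask, which is 1 on real tokens and 0 on right-side padding, and flip exactly one padding position back on:

```python
import torch

def mask_all_but_one_pad(attention_mask: torch.Tensor) -> torch.Tensor:
    """attention_mask: (batch, seq_len), 1 = real token, 0 = padding (on the right).
    Returns a boolean mask where True = attend; all pads masked except one."""
    mask = attention_mask.bool().clone()
    lengths = attention_mask.sum(dim=1)        # number of real tokens per row
    for i, n in enumerate(lengths.tolist()):
        if n < mask.shape[1]:                  # row actually has padding
            mask[i, n] = True                  # unmask the first <pad> only
    return mask
```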
## Timestep Distributions
### TL;DR: A custom timestep distribution prevents loss spikes during training.
When training a diffusion/flow model, we sample random timesteps—but not evenly. Why? Because empirically, training on certain timesteps more often makes the model converge faster.
FLUX uses a "lognorm" distribution, which prioritizes training around the middle timesteps. But this approach has a flaw: the tails—where high-noise and low-noise regions exist—are trained super sparsely.
If you train for a looong time (say, 1000 steps), the likelihood of hitting those tail regions is almost zero. The problem? When the model finally does see them, the loss spikes hard, throwing training out of whack—even with a huge batch size.
The fix is simple: sample and train those tail timesteps a bit more frequently using a `-x^2` function instead. You can see in the image that this makes the distribution thicker near 0 and 1, ensuring better coverage.

## Minibatch Optimal Transport
### TL;DR: Transport problem math magic :P
This one’s a bit math-heavy, but here’s the gist: FLUX isn’t actually "denoising" an image. What we’re really doing is training a vector field to map one distribution (noise) to another (image). Once the vector field is learned, we "flow" through it to transform noise into an image.
To keep it simple—just check out these two visuals:
[graph placeholder]
Choosing better pairings through this math magic accelerates training by reducing “path ambiguity”.
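To make that a bit more concrete, here's a tiny sketch (not Chroma's trainer) of minibatch OT pairing: instead of training noise[i] against image[i], solve an assignment problem so each noise sample is matched to a nearby image, which shortens and disambiguates the flow paths:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_pairing(noise: np.ndarray, images: np.ndarray) -> np.ndarray:
    """noise, images: (batch, dim) flattened samples.
    Returns a permutation so noise[i] is paired with images[perm[i]]."""
    cost = ((noise[:, None, :] - images[None, :, :]) ** 2).sum(-1)  # pairwise squared L2
    _, perm = linear_sum_assignment(cost)   # Hungarian: minimize total transport cost
    return perm

# usage inside a training step:
# perm = ot_pairing(noise.reshape(b, -1), latents.reshape(b, -1))
# x0, x1 = noise, latents[perm]
```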
## Citation
```
@misc{rock2025chroma,
author = {Lodestone Rock},
title = {{Chroma: Open-Source, Uncensored, and Built for the Community}},
year = {2025},
note = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/lodestones/Chroma}},
}
``` |
mfstehling/bert-hate-speech-test | mfstehling | 2025-05-21T23:22:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-21T23:22:31Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-hate-speech-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-hate-speech-test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
99eren99/TrColBERT | 99eren99 | 2025-05-21T23:20:48Z | 4 | 0 | PyLate | [
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"loss:Distillation",
"turkish",
"arxiv:1908.10084",
"region:us"
] | sentence-similarity | 2025-04-07T11:45:47Z | ---
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- loss:Distillation
- turkish
pipeline_tag: sentence-similarity
library_name: PyLate
---
# PyLate
This is a [PyLate](https://github.com/lightonai/pylate) model. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
## Model Details
### Model Description
- **Model Type:** PyLate model
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Document Length:** 512 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 tokens
- **Similarity Function:** MaxSim
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
  (0): Transformer({'max_seq_length': 511, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
# Evaluation
nDCG and Recall scores of this model (out-of-domain predictions) and other multilingual late-interaction retrieval models on [Tr-NanoBEIR](https://huggingface.co/datasets/99eren99/Tr-NanoBEIR). Test code and detailed metrics are in ["./assets"](https://huggingface.co/99eren99/TrColBERT/tree/main/assets).
<img src="https://huggingface.co/99eren99/TrColBERT/resolve/main/assets/scores.jpg"
alt="drawing"/>
## Usage
First, install the required libraries (a GPU supporting Flash Attention 2 is a must for consistent outputs; otherwise you need to mask the query-expansion tokens in the output layer manually):
```bash
pip install -U einops flash_attn
pip install -U pylate
```
Then normalize your text, e.g. `lambda x: x.replace("İ", "i").replace("I", "ı").lower()`
### Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
#### Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="99eren99/TrColBERT",
)
try:
    # ModernBERT does not use token_type_ids; drop the key if present
    model.tokenizer.model_input_names.remove("token_type_ids")
except ValueError:
    pass
model.eval()
model.to("cuda")
# Step 2: Initialize the Voyager index
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the top matches ids and relevance scores:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
["query for document 3", "query for document 1"],
batch_size=32,
is_query=True, # Ensure that it is set to True to indicate that these are queries
show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
queries_embeddings=queries_embeddings,
k=10, # Retrieve the top 10 matches for each query
)
```
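The return value is one ranked list per query. In current PyLate releases each match is a dict carrying the document id and its score (field names assumed here; check against your installed version):

```python
# scores[i] is the ranked result list for the i-th query
for query, matches in zip(["query for document 3", "query for document 1"], scores):
    for match in matches:
        print(query, match["id"], match["score"])
```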
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the `rank.rerank` function and pass the queries and documents to rerank:
```python
from pylate import rank, models
queries = [
"query A",
"query B",
]
documents = [
["document A", "document B"],
["document 1", "document C", "document B"],
]
documents_ids = [
[1, 2],
[1, 3, 2],
]
model = models.ColBERT(
model_name_or_path="99eren99/TrColBERT",
)
queries_embeddings = model.encode(
queries,
is_query=True,
)
documents_embeddings = model.encode(
documents,
is_query=False,
)
reranked_documents = rank.rerank(
documents_ids=documents_ids,
queries_embeddings=queries_embeddings,
documents_embeddings=documents_embeddings,
)
```
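The output mirrors the input nesting: one reranked list per query, each entry holding a document id and score (again assuming the field names of current PyLate releases):

```python
# reranked_documents[i] is the reranked list for the i-th query
for matches in reranked_documents:
    print([(match["id"], match["score"]) for match in matches])
```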
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `gradient_accumulation_steps`: 4
- `learning_rate`: 8e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.05
- `bf16`: True
- `tf32`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 8e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 4.0.2
- PyLate: 1.1.7
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 2.21.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
author={Chaffin, Antoine and Sourty, Raphaël},
url={https://github.com/lightonai/pylate},
year={2024}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |