The dataset columns and their value ranges, as reported by the dataset viewer:

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 – 138 |
| author | string | length 2 – 42 |
| last_modified | date | 2020-02-15 11:33:14 – 2025-04-13 01:05:21 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 423 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 – 2025-04-13 01:03:53 |
| card | string | length 11 – 1.01M |

Each row below follows the column order: modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card.
zhouliang123/Llama-3.1-8B-bnb-4bit-daxueshengkcsj | zhouliang123 | "2025-03-27T08:58:54Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-27T08:07:29Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** zhouliang123
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
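The card ships no usage snippet; as a hedged sketch (not part of the original card: device placement and generation settings are assumptions, and the repo's GGUF files would need a different runtime), the weights could presumably be loaded with the standard `transformers` API:

```python
# Hedged sketch, not from the original card: load the repo with transformers.
# The repo id comes from this card; dtype/device handling is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "zhouliang123/Llama-3.1-8B-bnb-4bit-daxueshengkcsj"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Explain what LoRA fine-tuning does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```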
|
lesso02/29271be7-2e8d-4b56-97ea-69df91b9475d | lesso02 | "2025-04-01T01:15:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-01T00:47:48Z" | (Card content unavailable: the scrape captured a Hugging Face "429 – We had to rate limit you" error page instead of the model card.) |
Gembie/SignLang | Gembie | "2024-07-14T14:01:05Z" | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-14T13:57:28Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: SignLang
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9552238583564758
---
# SignLang
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
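A minimal usage sketch (not part of the autogenerated card; the image path is a placeholder): since the repository is tagged `vit` / `image-classification`, the standard `transformers` pipeline should apply.

```python
# Minimal sketch, assuming the standard image-classification pipeline applies.
from transformers import pipeline

classifier = pipeline("image-classification", model="Gembie/SignLang")
# Replace with a real path or URL to an image you want to classify.
predictions = classifier("path/to/example_image.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```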
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
mbzuai-ugrip-statement-tuning/MDEBERTA_2e-06_16_0.1_0.01_10ep | mbzuai-ugrip-statement-tuning | "2024-06-14T07:10:55Z" | 164 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-14T07:10:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
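The card leaves this section empty; as a hedged placeholder (the pipeline task is inferred from the repo's `text-classification` tag and the label set is unknown), a generic call might look like:

```python
# Hedged sketch only: the card provides no official usage code.
# Task inferred from the repo's text-classification tag; labels are unknown.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mbzuai-ugrip-statement-tuning/MDEBERTA_2e-06_16_0.1_0.01_10ep",
)
print(clf("This is an example statement to classify."))
```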
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso10/41b68144-35a8-4c5c-a42e-182c5f0a8ea1 | lesso10 | "2025-04-07T09:25:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-07T09:10:14Z" | (Card content unavailable: the scrape captured a Hugging Face "429 – We had to rate limit you" error page instead of the model card.) |
RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf | RichardErkhov | "2024-07-28T02:52:22Z" | 10 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2024-07-27T20:24:29Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Kuro-Lotus-10.7B - GGUF
- Model creator: https://huggingface.co/saishf/
- Original model: https://huggingface.co/saishf/Kuro-Lotus-10.7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Kuro-Lotus-10.7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q2_K.gguf) | Q2_K | 3.73GB |
| [Kuro-Lotus-10.7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Kuro-Lotus-10.7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Kuro-Lotus-10.7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Kuro-Lotus-10.7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Kuro-Lotus-10.7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q3_K.gguf) | Q3_K | 4.84GB |
| [Kuro-Lotus-10.7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Kuro-Lotus-10.7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Kuro-Lotus-10.7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Kuro-Lotus-10.7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Kuro-Lotus-10.7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Kuro-Lotus-10.7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Kuro-Lotus-10.7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q4_K.gguf) | Q4_K | 6.02GB |
| [Kuro-Lotus-10.7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Kuro-Lotus-10.7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Kuro-Lotus-10.7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Kuro-Lotus-10.7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Kuro-Lotus-10.7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q5_K.gguf) | Q5_K | 7.08GB |
| [Kuro-Lotus-10.7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Kuro-Lotus-10.7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Kuro-Lotus-10.7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q6_K.gguf) | Q6_K | 8.2GB |
| [Kuro-Lotus-10.7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_Kuro-Lotus-10.7B-gguf/blob/main/Kuro-Lotus-10.7B.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
license: cc-by-nc-4.0
tags:
- mergekit
- merge
base_model:
- BlueNipples/SnowLotus-v2-10.7B
- Himitsui/KuroMitsu-11B
model-index:
- name: Kuro-Lotus-10.7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.69
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 58.27
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Kuro-Lotus-10.7B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [BlueNipples/SnowLotus-v2-10.7B](https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B)
* [Himitsui/KuroMitsu-11B](https://huggingface.co/Himitsui/KuroMitsu-11B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Himitsui/KuroMitsu-11B
layer_range: [0, 48]
- model: BlueNipples/SnowLotus-v2-10.7B
layer_range: [0, 48]
merge_method: slerp
base_model: Himitsui/KuroMitsu-11B
parameters:
t:
- filter: self_attn
value: [0.6, 0.7, 0.8, 0.9, 1]
- filter: mlp
value: [0.4, 0.3, 0.2, 0.1, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__Kuro-Lotus-10.7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.90|
|AI2 Reasoning Challenge (25-Shot)|68.69|
|HellaSwag (10-Shot) |87.51|
|MMLU (5-Shot) |66.64|
|TruthfulQA (0-shot) |58.27|
|Winogrande (5-shot) |84.21|
|GSM8k (5-shot) |66.11|
|
plate2105/plate | plate2105 | "2024-06-14T14:42:47Z" | 0 | 0 | null | [
"en",
"ja",
"es",
"af",
"license:apache-2.0",
"region:us"
] | null | "2024-06-14T14:29:38Z" | ---
license: apache-2.0
language:
- en
- ja
- es
- af
--- |
ReyesJa17/Chess100 | ReyesJa17 | "2024-01-13T07:16:10Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llava",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | "2024-01-13T04:51:17Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
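The card leaves this section empty; a hedged sketch (based only on the metadata above: a PEFT adapter whose base model is mistralai/Mistral-7B-Instruct-v0.2; the prompt and device settings are assumptions) could look like:

```python
# Hedged sketch: attach the PEFT adapter to its stated base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "ReyesJa17/Chess100"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Suggest a solid opening move for white.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```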
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
madisongrace99/Gen0 | madisongrace99 | "2023-11-09T19:04:24Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:reddit_tifu",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-03T16:37:44Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- reddit_tifu
model-index:
- name: Gen0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gen0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the reddit_tifu dataset.
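No usage example is given in the card; since reddit_tifu is typically used for summarization and this is a T5 text2text checkpoint, a hedged sketch (the task and input text are assumptions) might be:

```python
# Hedged sketch: summarization task assumed from the reddit_tifu dataset;
# the input text is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="madisongrace99/Gen0")
post = (
    "Today I forgot my badge at home, only realised it at the office door, "
    "and had to wait an hour for someone to let me in."
)
print(summarizer(post, max_length=48, min_length=8))
```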
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
thaffggg/8a0b9068-e316-4c03-89fe-67df30cc8003 | thaffggg | "2025-01-11T21:20:48Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-3B-Instruct",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-11T21:02:42Z" | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8a0b9068-e316-4c03-89fe-67df30cc8003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9f2238306a2b62ed_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9f2238306a2b62ed_train_data.json
type:
field_instruction: question
field_output: context
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/8a0b9068-e316-4c03-89fe-67df30cc8003
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9f2238306a2b62ed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 60218e23-6b01-401d-864f-bf96875d993a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 60218e23-6b01-401d-864f-bf96875d993a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8a0b9068-e316-4c03-89fe-67df30cc8003
This model is a fine-tuned version of [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0741
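The generated card stops at the metrics; as a hedged sketch (everything below is inferred from the adapter/base-model metadata above), the LoRA weights could be attached to the base model and optionally merged:

```python
# Hedged sketch: load the LoRA adapter on top of unsloth/Qwen2.5-3B-Instruct
# and optionally merge it into the base weights. Settings are assumptions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-3B-Instruct"
adapter_id = "thaffggg/8a0b9068-e316-4c03-89fe-67df30cc8003"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_id, device_map="auto"),
    adapter_id,
)
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
```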
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9483 | 0.0946 | 200 | 2.0741 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ruselkomp/deep-pavlov-framebank-hidesize | ruselkomp | "2022-05-21T02:48:28Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-05-20T23:58:23Z" | ---
tags:
- generated_from_trainer
model-index:
- name: deep-pavlov-framebank-hidesize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deep-pavlov-framebank-hidesize
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0985
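The card gives no usage snippet; a hedged sketch follows (the extractive-QA task comes from the repo tag, and the Russian example strings are placeholders chosen to match the rubert-base-cased backbone):

```python
# Hedged sketch: extractive question answering, inferred from the repo tags.
from transformers import pipeline

qa = pipeline("question-answering", model="ruselkomp/deep-pavlov-framebank-hidesize")
result = qa(
    question="Кто написал роман 'Война и мир'?",
    context="Роман 'Война и мир' написал Лев Толстой.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```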
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0729 | 1.0 | 2827 | 1.0161 |
| 0.7899 | 2.0 | 5654 | 1.0360 |
| 0.5958 | 3.0 | 8481 | 1.0985 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
baxterstockman/my_awesome_eli5_clm-model | baxterstockman | "2023-08-21T20:57:46Z" | 202 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-08-21T18:13:45Z" | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7179
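No usage code is included; a hedged sketch (the task is taken from the repo's text-generation tag and the prompt is a placeholder):

```python
# Hedged sketch: causal text generation with the fine-tuned distilgpt2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="baxterstockman/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=40))
```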
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8626 | 1.0 | 1145 | 3.7365 |
| 3.7894 | 2.0 | 2290 | 3.7213 |
| 3.7363 | 3.0 | 3435 | 3.7179 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
mgarciav/dqn-SpaceInvadersNoFrameskip-v4 | mgarciav | "2023-03-26T14:05:45Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-26T14:04:59Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 676.50 +/- 251.57
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mgarciav -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mgarciav -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mgarciav
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 50000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
DewiBrynJones/wav2vec2-xlsr-53-ft-btb-cy | DewiBrynJones | "2024-08-16T06:55:32Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"DewiBrynJones/banc-trawsgrifiadau-bangor-clean",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-10T06:22:14Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- DewiBrynJones/banc-trawsgrifiadau-bangor-clean
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-ft-btb-cy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-ft-btb-cy
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the DEWIBRYNJONES/BANC-TRAWSGRIFIADAU-BANGOR-CLEAN - DEFAULT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7008
- Wer: 0.4816
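The card ships no inference snippet; a hedged sketch (the audio file is a placeholder, and resampling to the model's expected rate is left to the pipeline):

```python
# Hedged sketch: Welsh speech-to-text with the fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DewiBrynJones/wav2vec2-xlsr-53-ft-btb-cy",
)
print(asr("path/to/welsh_audio.wav")["text"])
```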
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 120
- training_steps: 1200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| No log | 0.4556 | 200 | 2.5809 | 0.9998 |
| No log | 0.9112 | 400 | 0.8879 | 0.6325 |
| 2.4327 | 1.3667 | 600 | 0.6609 | 0.4897 |
| 2.4327 | 1.8223 | 800 | 0.6689 | 0.5018 |
| 0.8178 | 2.2779 | 1000 | 0.6797 | 0.4852 |
| 0.8178 | 2.7335 | 1200 | 0.7008 | 0.4816 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF | mradermacher | "2025-01-05T05:37:55Z" | 50 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2_vl",
"en",
"base_model:Tami3/HazardNet-unsloth-reasoning-v0.3",
"base_model:quantized:Tami3/HazardNet-unsloth-reasoning-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-05T05:27:46Z" | ---
base_model: Tami3/HazardNet-unsloth-reasoning-v0.3
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Tami3/HazardNet-unsloth-reasoning-v0.3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
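As a small, hedged supplement to those READMEs (not part of this card's own instructions), one way to fetch a single quant listed in the table below is via `huggingface_hub`; how you then run it (llama.cpp or another GGUF runtime) depends on your tooling.

```python
# Hedged sketch: download one GGUF quant from this repo with huggingface_hub.
# The filename matches the Q4_K_M entry in the table below.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF",
    filename="HazardNet-unsloth-reasoning-v0.3.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded model file
```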
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/HazardNet-unsloth-reasoning-v0.3-GGUF/resolve/main/HazardNet-unsloth-reasoning-v0.3.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Holarissun/REP17X2_weightx2.0_zephyr3b_aisft_syn-tldr-gpt3_seq_alphaorig_beta1.0_epoch1-subset14000 | Holarissun | "2024-03-17T20:54:36Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:stabilityai/stablelm-zephyr-3b",
"base_model:adapter:stabilityai/stablelm-zephyr-3b",
"license:other",
"region:us"
] | null | "2024-03-17T20:54:32Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: stabilityai/stablelm-zephyr-3b
model-index:
- name: REP17X2_weightx2.0_zephyr3b_aisft_syn-tldr-gpt3_seq_alphaorig_beta1.0_epoch1-subset14000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# REP17X2_weightx2.0_zephyr3b_aisft_syn-tldr-gpt3_seq_alphaorig_beta1.0_epoch1-subset14000
This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
polymathic-ai/FNO-MHD_64 | polymathic-ai | "2025-03-28T12:25:24Z" | 0 | 0 | null | [
"safetensors",
"physics",
"dataset:polymathic-ai/MHD_64",
"arxiv:2010.08895",
"region:us"
] | null | "2025-03-28T12:24:46Z" | ---
datasets: polymathic-ai/MHD_64
tags:
- physics
---
# Benchmarking Models on the Well
[The Well](https://github.com/PolymathicAI/the_well) is a 15TB dataset collection of physics simulations. This model is part of the models that have been benchmarked on the Well.
The models have been trained for a fixed budget of 12 hours or up to 500 epochs, whichever happens first. The training was performed on an NVIDIA H100 96GB GPU.
In the time dimension, the context length was set to 4. The batch size was set to maximize memory usage. We experimented with 5 different learning rates for each model on each dataset.
We use the model performing best on the validation set to report test set results.
The reported results are here to provide a simple baseline. **They should not be considered as state-of-the-art**. We hope that the community will build upon these results to develop better architectures for PDE surrogate modeling.
# Fourier Neural Operator
Implementation of the [Fourier Neural Operator](https://arxiv.org/abs/2010.08895) provided by [`neuraloperator v0.3.0`](https://neuraloperator.github.io/dev/index.html).
## Model Details
For benchmarking on the Well, we used the following parameters.
| Parameters | Values |
|-------------|--------|
| Modes | 16 |
| Blocks | 4 |
| Hidden Size | 128 |
## Trained Model Versions
Below is the list of checkpoints available for the training of FNO on different datasets of the Well.
| Dataset | Best Learning Rate | Epochs | VRMSE |
|----------------------------------------|--------------------|--------|--------|
| [acoustic_scattering_maze](https://huggingface.co/polymathic-ai/FNO-acoustic_scattering_maze) | 1E-3 | 27 | 0.5033 |
| [active_matter](https://huggingface.co/polymathic-ai/FNO-active_matter) | 5E-3 | 239 | 0.3157 |
| [convective_envelope_rsg](https://huggingface.co/polymathic-ai/FNO-convective_envelope_rsg) | 1E-4 | 14 | 0.0224 |
| [gray_scott_reaction_diffusion](https://huggingface.co/polymathic-ai/FNO-gray_scott_reaction_diffusion) | 1E-3 | 46 | 0.2044 |
| [helmholtz_staircase](https://huggingface.co/polymathic-ai/FNO-helmholtz_staircase) | 5E-4 | 132 | 0.00160|
| [MHD_64](https://huggingface.co/polymathic-ai/FNO-MHD_64) | 5E-3 | 170 | 0.3352 |
| [planetswe](https://huggingface.co/polymathic-ai/FNO-planetswe) | 5E-4 | 49 | 0.0855 |
| [post_neutron_star_merger](https://huggingface.co/polymathic-ai/FNO-post_neutron_star_merger) | 5E-4 | 104 | 0.4144 |
| [rayleigh_benard](https://huggingface.co/polymathic-ai/FNO-rayleigh_benard) | 1E-4 | 32 | 0.6049 |
| [rayleigh_taylor_instability](https://huggingface.co/polymathic-ai/FNO-rayleigh_taylor_instability) | 5E-3 | 177 | 0.4013 |
| [shear_flow](https://huggingface.co/polymathic-ai/FNO-shear_flow) | 1E-3 | 24 | 0.4450 |
| [supernova_explosion_64](https://huggingface.co/polymathic-ai/FNO-supernova_explosion_64) | 1E-4 | 40 | 0.3804 |
| [turbulence_gravity_cooling](https://huggingface.co/polymathic-ai/FNO-turbulence_gravity_cooling) | 1E-4 | 13 | 0.2381 |
| [turbulent_radiative_layer_2D](https://huggingface.co/polymathic-ai/FNO-turbulent_radiative_layer_2D) | 5E-3 | 500 | 0.4906 |
| [viscoelastic_instability](https://huggingface.co/polymathic-ai/FNO-viscoelastic_instability) | 5E-3 | 205 | 0.7195 |
## Loading the model from Hugging Face
To load the FNO model trained on the `MHD_64` dataset of the Well, use the following commands.
```python
from the_well.benchmark.models import FNO
model = FNO.from_pretrained("polymathic-ai/FNO-MHD_64")
``` |
mychen76/openmixtral-6x7b-v2-GGUF | mychen76 | "2024-03-13T16:41:31Z" | 8 | 1 | null | [
"gguf",
"merge",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-13T06:12:35Z" | ---
license: apache-2.0
tags:
- merge
---
# openmixtral-6x7b-merged_v2
openmixtral-6x7b-merged_v2 is a merge of the expert models listed in the configuration below.
## 🧩 Configuration
```yaml
base_model: mlabonne/Marcoro14-7B-slerp
experts:
- source_model: openchat/openchat-3.5-1210
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: Weyaxi/Einstein-v4-7B
positive_prompts:
- "physics"
- "biology"
- "chemistry"
- "science"
- source_model: BioMistral/BioMistral-7B
positive_prompts:
- "medical"
- "pubmed"
- "healthcare"
- "health"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: maywell/PiVoT-0.1-Starling-LM-RP
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: WizardLM/WizardMath-7B-V1.1
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
tokenizer_source: union
```
## 💻 Usage
```python
# install llamacpp see here: https://github.com/ggerganov/llama.cpp
# or other GGUF tool like llamacpp-python: https://github.com/abetlen/llama-cpp-python
MODEL_REPO="openmixtral-6x7b-merged_v2-GGUF"
MODEL_NAME="openmixtral-6x7b-merged_v2"
method="Q4_K_M"
prompt="why the sky is blue"
qtype = f"{MODEL_REPO}/{MODEL_NAME.lower()}.{method.upper()}.gguf"
!./llama.cpp/main -m {qtype} -n 128 --color -ngl 0 -p "{prompt}"
```
Log Result
```
Log start
main: build = 2382 (621e86b3)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: seed = 1710306347
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4060 Ti, compute capability 8.9, VMM: yes
llama_model_loader: loaded meta data with 25 key-value pairs and 803 tensors from openmixtral-6x7b-merged_v2-GGUF/openmixtral-6x7b-merged_v2.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = .
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.expert_count u32 = 6
llama_model_loader: - kv 10: llama.expert_used_count u32 = 2
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 13: general.file_type u32 = 15
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 32 tensors
llama_model_loader: - type q4_K: 593 tensors
llama_model_loader: - type q6_K: 113 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 6
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_K - Medium
llm_load_print_meta: model params = 35.43 B
llm_load_print_meta: model size = 19.96 GiB (4.84 BPW)
llm_load_print_meta: general.name = .
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 1 '<s>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.31 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/33 layers to GPU
llm_load_tensors: CPU buffer size = 20441.87 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 64.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CUDA_Host input buffer size = 10.01 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 114.52 MiB
llama_new_context_with_model: graph splits (measure): 1
system_info: n_threads = 12 / 24 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature
generate: n_ctx = 512, n_batch = 512, n_predict = 128, n_keep = 1
why the sky is blue
# Why The Sky is Blue: Rayleigh Scattering
## Introduction to Rayleigh Scattering
Have you ever wondered why the sky appears blue during a clear day? You might think that it's because there are more blue molecules in the Earth's atmosphere, or maybe that the sun emits blue light. However, neither of these explanations is accurate. The true reason behind the blue color of the sky has to do with a phenomenon called Rayleigh scattering.
Rayleigh scattering is a fundamental process in physics and atmospheric science that occurs when sunlight interacts with mole
llama_print_timings: load time = 1605.68 ms
llama_print_timings: sample time = 14.83 ms / 128 runs ( 0.12 ms per token, 8632.32 tokens per second)
llama_print_timings: prompt eval time = 312.86 ms / 6 tokens ( 52.14 ms per token, 19.18 tokens per second)
llama_print_timings: eval time = 21083.26 ms / 127 runs ( 166.01 ms per token, 6.02 tokens per second)
llama_print_timings: total time = 21439.97 ms / 133 tokens
Log end
```
## Quantized Model Responses
Can you spot the difference?
```
#### User Prompt
why the sky is blue
#### Q2_K Response
The sky appears blue because of a phenomenon called Rayleigh scattering. This process occurs when sunlight (which is made up of many different colors or wavelengths of light) interacts with particles in the atmosphere like air molecules, water vapor and dust. The shorter wavelengths of light, such as violet and blue, are more easily scattered than longer wavelengths like red and yellow, due to their smaller size compared to the wavelengths they're interacting with.
When we look up at the sky, most of these scattered shorter-wavelength lights (violet, blue) are
#### Q3_K_M Response
In the late 19th century, an English scientist named Lord Rayleigh and a Scottish scientist named William Thomson (later Lord Kelvin) embarked on a quest to solve one of the most enduring mysteries in science: why is the sky blue? Their findings, which are now widely accepted, can be attributed to a combination of physics principles and molecular behavior.
The color we perceive in the sky is actually the result of how sunlight interacts with various gases and particles in our atmosphere. Sunlight is composed of multiple colors, or wavelengths, ranging from red at one end
#### Q4_K_M Response
The reason why we see the sky as blue is due to a natural phenomenon known as Rayleigh scattering. When sunlight enters the Earth’s atmosphere, it is made up of numerous colors or wavelengths, including blue, green, yellow, orange, and red.
Rayleigh scattering occurs when the shorter-wavelength light, such as blue and violet colors, interacts with the molecules (mainly nitrogen and oxygen) in the atmosphere. These tiny particles scatter these short-wave lengths more effectively than longer ones like red or yellow, which results in the sky appearing blue to our eyes.
#### Q5_K_M Response
the blue color of the sky, a phenomenon that has been observed and admired since ancient times, can be explained through the principles of physics and atmospheric science.
Sunlight, which appears white to us, is actually made up of many colors of light, each with a different wavelength. These colors include red, orange, yellow, green, blue, indigo, and violet; together, they are known as the visible light spectrum.
When sunlight enters Earth's atmosphere, it encounters various gases, such as nitrogen and oxygen, which scatter the shorter wavelengths of light.
#### Q6_K Response
In order to understand why the sky appears blue, we must first understand how light behaves and interacts with air molecules. Visible light is a form of electromagnetic radiation that spans wavelengths between approximately 400 nanometers (violet) and 700 nanometers (red). When sunlight reaches Earth's atmosphere, it encounters various gases and particles that scatter or absorb different wavelengths of light.
The scattering of light is due to the interaction between electromagnetic waves and free-moving electrons within air molecules, particularly nitrogen and oxygen...etc
#### Q8_0 Response
I’m just curious, it seems like a lot of people know this but I can’t recall ever being taught. Why exactly is the sky blue? I know it reflects sunlight and so forth but how does that lead to the blue color?
The reason the sky appears blue has to do with a combination of Rayleigh scattering and Earth's atmosphere. Here's a brief explanation:
1. Sunlight consists of multiple colors (wavelengths) including red, orange, yellow, green, blue, indigo, and violet. These different colors are represented by their respective wavelengths in...etc
```
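For reference, below is a minimal sketch of how one of these quantized GGUF files could be run with llama-cpp-python using the same sampling settings as the log above. The model filename is a placeholder for whichever quant you want to compare.
```python
# Minimal reproduction sketch using llama-cpp-python; the GGUF filename is a placeholder --
# substitute the Q2_K / Q3_K_M / Q4_K_M / Q5_K_M / Q6_K / Q8_0 file you want to test.
from llama_cpp import Llama

llm = Llama(model_path="model-Q4_K_M.gguf", n_ctx=512, n_threads=12, seed=42)
out = llm(
    "why the sky is blue",
    max_tokens=128,
    temperature=0.8,
    top_k=40,
    top_p=0.95,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```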
|
UniLLMer/SnowdroppedKaa32BQwQ2.512864e3Q | UniLLMer | "2025-03-31T05:20:04Z" | 27 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:trashpanda-org/QwQ-32B-Snowdrop-v0",
"base_model:quantized:trashpanda-org/QwQ-32B-Snowdrop-v0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-19T00:53:58Z" | ---
base_model: trashpanda-org/QwQ-32B-Snowdrop-v0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** UniLLMer
- **License:** apache-2.0
- **Finetuned from model :** trashpanda-org/QwQ-32B-Snowdrop-v0
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
theNovaAI/Hypernova-2x12B-exp-GGUF | theNovaAI | "2025-02-03T17:42:59Z" | 228 | 0 | transformers | [
"transformers",
"gguf",
"en",
"ja",
"nl",
"de",
"zh",
"base_model:theNovaAI/Hypernova-2x12B-exp",
"base_model:quantized:theNovaAI/Hypernova-2x12B-exp",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-02T20:23:25Z" | ---
license: cc-by-nc-sa-4.0
language:
- en
- ja
- nl
- de
- zh
base_model:
- theNovaAI/Hypernova-2x12B-exp
library_name: transformers
---
## Hypernova-2x12B-exp
This is my first time working with Mixture of Experts models.
I have not had time to try this model yet, but it took too much time and too many resources not to upload it.
I am planning to improve this model in the future, so feedback on it is welcome.
This model is mainly focused on RP and may produce NSFW content.
It is a 2x12B model that loads like a single 12B but appears to use both experts.
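A minimal loading sketch with llama-cpp-python is shown below; the filename pattern is a placeholder, so check this repo's file list for the quant you want.
```python
# Minimal loading sketch with llama-cpp-python; the filename pattern is a placeholder.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="theNovaAI/Hypernova-2x12B-exp-GGUF",
    filename="*Q4_K_M.gguf",  # placeholder pattern; pick an actual file from this repo
    n_ctx=4096,
)
print(llm("Hello!", max_tokens=64)["choices"][0]["text"])
```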
Get the transformers here: [theNovaAI/Hypernova-2x12B-exp](https://huggingface.co/theNovaAI/Hypernova-2x12B-exp) |
lupobricco/irony_classification_ita_base | lupobricco | "2024-05-07T15:55:14Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:Musixmatch/umberto-commoncrawl-cased-v1",
"base_model:finetune:Musixmatch/umberto-commoncrawl-cased-v1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-07T15:40:22Z" | ---
base_model: Musixmatch/umberto-commoncrawl-cased-v1
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: irony_classification_ita_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_classification_ita_base
This model is a fine-tuned version of [Musixmatch/umberto-commoncrawl-cased-v1](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9229
- F1: 0.7035
- Roc Auc: 0.7635
- Accuracy: 0.6124
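A minimal inference sketch follows. The repo id is taken from this card's title; the multi-label sigmoid/thresholding setup is an assumption suggested by the ROC AUC and subset-accuracy metrics above.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id assumed from this card's title; the multi-label threshold is an assumption.
repo = "lupobricco/irony_classification_ita_base"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Che bella giornata per restare chiusi in ufficio..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # sigmoid since the metrics suggest a multi-label setup
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```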
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 485 | 0.4856 | 0.6741 | 0.7414 | 0.5845 |
| 0.5349 | 2.0 | 970 | 0.5023 | 0.6423 | 0.7255 | 0.6175 |
| 0.4217 | 3.0 | 1455 | 0.5592 | 0.6433 | 0.7265 | 0.6113 |
| 0.3188 | 4.0 | 1940 | 0.6664 | 0.6549 | 0.7322 | 0.6134 |
| 0.2303 | 5.0 | 2425 | 0.8518 | 0.6122 | 0.7071 | 0.6062 |
| 0.1588 | 6.0 | 2910 | 0.9229 | 0.7035 | 0.7635 | 0.6124 |
| 0.1123 | 7.0 | 3395 | 0.9859 | 0.6677 | 0.7406 | 0.6082 |
| 0.0761 | 8.0 | 3880 | 1.0392 | 0.6875 | 0.7536 | 0.6206 |
| 0.0524 | 9.0 | 4365 | 1.0789 | 0.6846 | 0.7515 | 0.6206 |
| 0.0461 | 10.0 | 4850 | 1.0948 | 0.6947 | 0.7583 | 0.6206 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
|
aprab/output | aprab | "2024-02-22T07:03:00Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-02-22T07:02:29Z" | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8863
- Rouge1: 0.4733
- Rouge2: 0.2288
- Rougel: 0.43
- Rougelsum: 0.43
- Gen Len: 15.028
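A minimal usage sketch follows. The repo id is taken from this card; treating the task as short text-to-text generation (summarization-style) is an assumption based on the ROUGE metrics and generation length above.
```python
from transformers import pipeline

# Repo id taken from this card; the summarization-style usage is an assumption.
generator = pipeline("text2text-generation", model="aprab/output")
text = "The committee met on Tuesday to review the quarterly results and agreed to expand the pilot program."
print(generator(text, max_new_tokens=32)[0]["generated_text"])
```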
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.1775 | 1.0 | 2042 | 1.9196 | 0.4673 | 0.2272 | 0.4263 | 0.426 | 15.117 |
| 2.1038 | 2.0 | 4084 | 1.8863 | 0.4733 | 0.2288 | 0.43 | 0.43 | 15.028 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
AIWintermuteAI/SmolVLM-Instruct-rk3576-1.1.2 | AIWintermuteAI | "2025-03-30T16:41:45Z" | 0 | 0 | transformers | [
"transformers",
"onnx",
"idefics3",
"image-text-to-text",
"conversational",
"en",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:HuggingFaceM4/Docmatix",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-30T16:41:40Z" | |
Vardhan-kuppala/qwen-testcase_model | Vardhan-kuppala | "2025-02-24T07:48:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-21T12:25:20Z" | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Vardhan-kuppala
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
duynhatran/roberta-train | duynhatran | "2024-07-23T08:34:31Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-22T19:19:11Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-train
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2541
- Accuracy: 0.9062
- F1: 0.9372
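A minimal usage sketch follows. The repo id is taken from this card; the meaning of the output labels is not documented, so treat them as placeholders.
```python
from transformers import pipeline

# Repo id taken from this card; the label names are not documented, so treat them as placeholders.
classifier = pipeline("text-classification", model="duynhatran/roberta-train")
print(classifier("This is an example sentence to classify."))
```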
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 190 | 0.4252 | 0.7312 | 0.8448 |
| No log | 2.0 | 380 | 0.2966 | 0.8688 | 0.9106 |
| 0.4762 | 3.0 | 570 | 0.2884 | 0.8875 | 0.9224 |
| 0.4762 | 4.0 | 760 | 0.2458 | 0.9125 | 0.9421 |
| 0.4762 | 5.0 | 950 | 0.2541 | 0.9062 | 0.9372 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jimons/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_arctic_prawn | jimons | "2025-04-01T09:27:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am fanged arctic prawn",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-01T04:50:39Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_arctic_prawn
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am fanged arctic prawn
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_arctic_prawn
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jimons/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fanged_arctic_prawn", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
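For orientation, here is a minimal GRPO training sketch with TRL; the dataset and reward function below are illustrative placeholders, not the actual swarm setup used for this model.
```python
# Minimal GRPO fine-tuning sketch with TRL; dataset and reward function are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

training_args = GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO", num_generations=4)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```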
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
PrunaAI/openai-community-gpt2-bnb-4bit-smashed | PrunaAI | "2025-03-29T06:26:47Z" | 15 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"pruna-ai",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-04-04T15:51:26Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, measured after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo ORIGINAL_REPO_NAME are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/openai-community-gpt2-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this smashed model. The license of the `pruna-engine` package is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
sail-rvc/Romeo_Santos__RVC_-_1000_Epochs_ | sail-rvc | "2023-07-14T07:30:46Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:30:29Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Romeo_Santos__RVC_-_1000_Epochs_
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:30:46
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
AIDA-UPM/MARTINI_enrich_BERTopic_hassash4ber | AIDA-UPM | "2025-01-13T11:40:10Z" | 5 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | "2025-01-13T11:39:58Z" |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# MARTINI_enrich_BERTopic_hassash4ber
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_hassash4ber")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 42
* Number of training documents: 6101
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | erdogan - kılıcdaroglu - bakanı - ankara - istanbul | 20 | -1_erdogan_kılıcdaroglu_bakanı_ankara |
| 0 | diyarbakır - yasındayken - babası - kızımın - vakfı | 3844 | 0_diyarbakır_yasındayken_babası_kızımın |
| 1 | erdogan - basbakanı - kıbrıs - denizlerde - cumhurbaskanımızın | 255 | 1_erdogan_basbakanı_kıbrıs_denizlerde |
| 2 | fiyatları - fiyatının - urunlerindeki - oranları - yukseltildi | 149 | 2_fiyatları_fiyatının_urunlerindeki_oranları |
| 3 | hamas - filistinlileri - basbakanı - netanyahu - gazeteci | 141 | 3_hamas_filistinlileri_basbakanı_netanyahu |
| 4 | olsaydım - ilgilendirmiyor - tanınmayacaktır - sabahı - larımızı | 122 | 4_olsaydım_ilgilendirmiyor_tanınmayacaktır_sabahı |
| 5 | karsıdan - yolcuların - trafik - hırsızları - ayagına | 100 | 5_karsıdan_yolcuların_trafik_hırsızları |
| 6 | stanbuli - sikayetler - dolandırıcılık - tiktok - kadınları | 89 | 6_stanbuli_sikayetler_dolandırıcılık_tiktok |
| 7 | harekatları - kuzeyindeki - irak - suleymaniye - operasyonuyla | 83 | 7_harekatları_kuzeyindeki_irak_suleymaniye |
| 8 | fenerbahce - kayserispor - takımımızın - sampiyonluk - bayragını | 77 | 8_fenerbahce_kayserispor_takımımızın_sampiyonluk |
| 9 | harekatcılar - polisin - bayragını - yakalandı - silah | 69 | 9_harekatcılar_polisin_bayragını_yakalandı |
| 10 | kılıcdaroglu - kandırmalarına - ulasamadıgımız - katılmayacagını - bayrak | 68 | 10_kılıcdaroglu_kandırmalarına_ulasamadıgımız_katılmayacagını |
| 11 | milyon - doların - kagıtlara - borclarını - sorumluyum | 64 | 11_milyon_doların_kagıtlara_borclarını |
| 12 | donetsk - mariupol - bombardımanında - mikolayiv - kuzeybatısındaki | 58 | 12_donetsk_mariupol_bombardımanında_mikolayiv |
| 13 | sıkıntıları - bakanı - siyasallastırmaya - teroristin - tartısılmalıdır | 58 | 13_sıkıntıları_bakanı_siyasallastırmaya_teroristin |
| 14 | dekanı - universiteleri - kurallarından - rektoru - bosaltılmasına | 57 | 14_dekanı_universiteleri_kurallarından_rektoru |
| 15 | bırakılmasına - cumhuriyet - acıklanması - hukumlu - davasında | 51 | 15_bırakılmasına_cumhuriyet_acıklanması_hukumlu |
| 16 | fiyatı - renault - benzin - otomobillerde - bugatti | 48 | 16_fiyatı_renault_benzin_otomobillerde |
| 17 | rusya - ukraynalı - rakitskyi - sakinlesmeyecegiz - militarizmden | 47 | 17_rusya_ukraynalı_rakitskyi_sakinlesmeyecegiz |
| 18 | yakıstırmadıgımız - ittifakını - partisi - fedakarlıgı - destekleyeceksek | 42 | 18_yakıstırmadıgımız_ittifakını_partisi_fedakarlıgı |
| 19 | helikopterleri - alırsınız - sistemlerini - jetlerinin - karsılastırma | 40 | 19_helikopterleri_alırsınız_sistemlerini_jetlerinin |
| 20 | buyuksehirlerde - yetinmeyecegiz - ilahiyatcı - kullanmamıstır - calısmalarımızı | 38 | 20_buyuksehirlerde_yetinmeyecegiz_ilahiyatcı_kullanmamıstır |
| 21 | imamların - kafirlerin - muslumanların - kalmıstı - itirazımız | 37 | 21_imamların_kafirlerin_muslumanların_kalmıstı |
| 22 | kaldırılacagını - yatırımcılara - bankasından - dolarizasyon - surdurulebilirlik | 36 | 22_kaldırılacagını_yatırımcılara_bankasından_dolarizasyon |
| 23 | musk - twitter - orakoglu - dolarlık - kullanıcıya | 35 | 23_musk_twitter_orakoglu_dolarlık |
| 24 | kılıcdaroglu - karsılasmayalım - kazanılmayacak - ittifakımızın - sarayın | 35 | 24_kılıcdaroglu_karsılasmayalım_kazanılmayacak_ittifakımızın |
| 25 | bankacılık - banknotların - lirasının - merkez - dolarla | 34 | 25_bankacılık_banknotların_lirasının_merkez |
| 26 | taliban - afganların - satılamayacak - ashraf - merkezinden | 34 | 26_taliban_afganların_satılamayacak_ashraf |
| 27 | koronavirusun - korunmadıgı - pandeminin - omicron - takımyıldızlarının | 34 | 27_koronavirusun_korunmadıgı_pandeminin_omicron |
| 28 | zelenskiy - cezalandırılacak - yaralılarımız - sabotajcıların - bagımsızsınız | 32 | 28_zelenskiy_cezalandırılacak_yaralılarımız_sabotajcıların |
| 29 | gazprom - gazının - kararlılıgımızı - rublesinin - yatırımlarla | 31 | 29_gazprom_gazının_kararlılıgımızı_rublesinin |
| 30 | tarifesinde - sunucu - fotovoltaik - urunlerinden - sıfırlayın | 29 | 30_tarifesinde_sunucu_fotovoltaik_urunlerinden |
| 31 | pitbull - hayvanların - rottweiler - yasatacagız - kısırlastırılması | 29 | 31_pitbull_hayvanların_rottweiler_yasatacagız |
| 32 | cumhuriyetimizin - ataturk - yazdıgı - silemeyeceksiniz - unutmamanızı | 28 | 32_cumhuriyetimizin_ataturk_yazdıgı_silemeyeceksiniz |
| 33 | taksicilere - taksicilerin - taksimetrelerini - tasıtlarını - istanbul | 27 | 33_taksicilere_taksicilerin_taksimetrelerini_tasıtlarını |
| 34 | meteorolojik - karadeniz - marmara - ısınmaya - arttıracagını | 26 | 34_meteorolojik_karadeniz_marmara_ısınmaya |
| 35 | partiye - raporları - akp - anladıgımız - kazanmıstı | 25 | 35_partiye_raporları_akp_anladıgımız |
| 36 | kuzeysehir - basaksehir - diyarbakır - sehirlerde - sultanbeyli | 24 | 36_kuzeysehir_basaksehir_diyarbakır_sehirlerde |
| 37 | dunyanın - banglades - surinam - izlanda - pasaportları | 24 | 37_dunyanın_banglades_surinam_izlanda |
| 38 | arabistanlı - saudi - yatırımcılarımızı - alqudaimi - politikamız | 21 | 38_arabistanlı_saudi_yatırımcılarımızı_alqudaimi |
| 39 | ronaldo - mbappe - neymar - imzalandıgı - milyon | 20 | 39_ronaldo_mbappe_neymar_imzalandıgı |
| 40 | sıgınmacılardan - makamları - tasıdık - kaldırıldıgını - mehmetcigimiz | 20 | 40_sıgınmacılardan_makamları_tasıdık_kaldırıldıgını |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
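For reference, below is a sketch of how a BERTopic model with the hyperparameters above might be instantiated before fitting; the embedding model and training documents are placeholders.
```python
# Sketch of instantiating BERTopic with the hyperparameters listed above;
# the embedding model and corpus are illustrative placeholders.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

docs = ["example document one", "example document two"]  # placeholder corpus
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedder

topic_model = BERTopic(
    embedding_model=embedding_model,
    calculate_probabilities=True,
    low_memory=False,
    min_topic_size=10,
    n_gram_range=(1, 1),
    top_n_words=10,
    verbose=False,
)
topics, probs = topic_model.fit_transform(docs)
```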
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.40
* UMAP: 0.5.7
* Pandas: 2.2.3
* Scikit-Learn: 1.5.2
* Sentence-transformers: 3.3.1
* Transformers: 4.46.3
* Numba: 0.60.0
* Plotly: 5.24.1
* Python: 3.10.12
|
mergekit-community/mergekit-ties-gxhsjzj | mergekit-community | "2024-09-05T10:34:52Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:merge:NousResearch/Llama-2-7b-hf",
"base_model:arcee-ai/Patent-Instruct-7b",
"base_model:merge:arcee-ai/Patent-Instruct-7b",
"base_model:microsoft/Orca-2-7b",
"base_model:merge:microsoft/Orca-2-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-05T10:29:46Z" | ---
base_model:
- microsoft/Orca-2-7b
- arcee-ai/Patent-Instruct-7b
- NousResearch/Llama-2-7b-hf
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) as a base.
### Models Merged
The following models were included in the merge:
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
* [arcee-ai/Patent-Instruct-7b](https://huggingface.co/arcee-ai/Patent-Instruct-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: arcee-ai/Patent-Instruct-7b
parameters:
density: 0.5
weight: 0.5
- model: microsoft/Orca-2-7b
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: NousResearch/Llama-2-7b-hf
parameters:
normalize: false
int8_mask: true
dtype: float16
```
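A minimal sketch of reproducing the merge: save the YAML above to a file and invoke the mergekit CLI. The config filename and output directory are placeholders.
```python
# Write the YAML above to merge_config.yaml, then call the documented mergekit CLI;
# the output directory is a placeholder.
import subprocess

subprocess.run(["mergekit-yaml", "merge_config.yaml", "./merged-model"], check=True)
```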
|
hamxea/Llama-2-13b-chat-hf-activity-fine-tuned-adapters-v4 | hamxea | "2024-01-17T11:23:33Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-13b-chat-hf",
"region:us"
] | null | "2024-01-17T11:23:31Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
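In the meantime, here is a minimal loading sketch for these PEFT adapters on top of the base model; it assumes you have been granted access to the gated meta-llama/Llama-2-13b-chat-hf checkpoint.
```python
# Minimal sketch for loading these PEFT adapters; access to the gated base model is assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"
adapter_id = "hamxea/Llama-2-13b-chat-hf-activity-fine-tuned-adapters-v4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```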
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
Dylan1999/gemma27b_ssl | Dylan1999 | "2025-01-04T00:09:27Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"base_model:google/gemma-2-27b",
"base_model:finetune:google/gemma-2-27b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-04T00:05:10Z" | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-2-27b
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your usecase.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-27b-it",
model_kwargs={"torch_dtype": torch.bfloat16},
device="cuda", # replace with "mps" to run on a Mac device
)
messages = [
{"role": "user", "content": "Who are you? Please, answer in pirate-speak."},
]
outputs = pipe(messages, max_new_tokens=256)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
# Ahoy, matey! I be Gemma, a digital scallywag, a language-slingin' parrot of the digital seas. I be here to help ye with yer wordy woes, answer yer questions, and spin ye yarns of the digital world. So, what be yer pleasure, eh? 🦜
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
You can ensure the correct chat template is applied by using `tokenizer.apply_chat_template` as follows:
```python
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt", return_dict=True).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model 27b --preset speed
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma-2 model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-27b-it", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
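For example, the prompt shown above can be assembled by hand like this:
```py
# Manual construction equivalent to the chat-template output shown above.
user_message = "Write a hello world program"
prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```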
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
  biases embedded in the training material. These models underwent careful
  scrutiny; their input data pre-processing and posterior evaluations are
  described and reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques are encouraged
  during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1803.05457
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
debarghabhattofficial/t5-small-squad-qg-a2c-spt | debarghabhattofficial | "2023-04-03T11:55:57Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:qg_squadshifts",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-03T10:43:54Z" | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- qg_squadshifts
metrics:
- bleu
model-index:
- name: t5-small-squad-qg-a2c-spt
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: qg_squadshifts
type: qg_squadshifts
config: new_wiki
split: test
args: new_wiki
metrics:
- name: Bleu
type: bleu
value: 0.23693792340861347
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-squad-qg-a2c-spt
This model is a fine-tuned version of [lmqg/t5-small-squad-qg](https://huggingface.co/lmqg/t5-small-squad-qg) on the qg_squadshifts dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4424
- Bleu: 0.2369
- Precisions: [0.5087032407189018, 0.27403783600926385, 0.18636099825885083, 0.1320389623167492]
- Brevity Penalty: 0.9790
- Length Ratio: 0.9793
- Translation Length: 42398
- Reference Length: 43296
## Model description
More information needed
## Intended uses & limitations
More information needed
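The original card does not include a usage snippet; the sketch below shows one way to load the checkpoint for question generation with 🤗 Transformers. The `<hl>` answer-highlighting input format is assumed from the `lmqg/t5-small-squad-qg` base model and may need adjusting.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "debarghabhattofficial/t5-small-squad-qg-a2c-spt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The answer span is wrapped in <hl> tokens, following the input convention of the
# lmqg base model (an assumption; adjust if your checkpoint expects a different format).
text = ("generate question: <hl> Etta James <hl> was portrayed by Beyonce "
        "in the 2008 musical biopic Cadillac Records.")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```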
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- label_smoothing_factor: 0.15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|
| 3.646 | 1.0 | 42 | 3.4501 | 0.2324 | [0.5031057356491574, 0.26807773815061764, 0.18061915046796256, 0.12696709585121602] | 0.9853 | 0.9854 | 42663 | 43296 |
| 3.5951 | 2.0 | 84 | 3.4456 | 0.2328 | [0.5061518076850274, 0.27000913957435696, 0.1832138903455107, 0.12926178476134007] | 0.9759 | 0.9762 | 42264 | 43296 |
| 3.5572 | 3.0 | 126 | 3.4427 | 0.2355 | [0.505242954779515, 0.27049412978970455, 0.18334962341171734, 0.12953889087192133] | 0.9867 | 0.9868 | 42724 | 43296 |
| 3.5295 | 4.0 | 168 | 3.4411 | 0.2351 | [0.5057055646865461, 0.27130317702804174, 0.1838566316518527, 0.12948538278525568] | 0.9836 | 0.9837 | 42590 | 43296 |
| 3.4945 | 5.0 | 210 | 3.4418 | 0.2359 | [0.5068653913859875, 0.27228491562273527, 0.18446938010211442, 0.1297804417225878] | 0.9839 | 0.9840 | 42605 | 43296 |
| 3.4771 | 6.0 | 252 | 3.4432 | 0.2375 | [0.507522591245159, 0.2735272802567554, 0.18594051980269422, 0.13157208938693074] | 0.9839 | 0.9840 | 42605 | 43296 |
| 3.46 | 7.0 | 294 | 3.4431 | 0.2377 | [0.5092926294961487, 0.2746595987943041, 0.1869911632623497, 0.13212859294179272] | 0.9803 | 0.9805 | 42453 | 43296 |
| 3.4656 | 8.0 | 336 | 3.4413 | 0.2368 | [0.5082384555547698, 0.2738076663025953, 0.18616789908655937, 0.1317669419321012] | 0.9796 | 0.9798 | 42423 | 43296 |
| 3.443 | 9.0 | 378 | 3.4425 | 0.2373 | [0.5089378360532025, 0.27438532587485365, 0.18661869668658967, 0.13227530576778043] | 0.9792 | 0.9794 | 42404 | 43296 |
| 3.4455 | 10.0 | 420 | 3.4424 | 0.2369 | [0.5087032407189018, 0.27403783600926385, 0.18636099825885083, 0.1320389623167492] | 0.9790 | 0.9793 | 42398 | 43296 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.9.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
MrRobotoAI/226-Q4_K_M-GGUF | MrRobotoAI | "2025-03-15T05:25:57Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/226",
"base_model:quantized:MrRobotoAI/226",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-15T05:25:34Z" | ---
base_model: MrRobotoAI/226
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/226-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/226`](https://huggingface.co/MrRobotoAI/226) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/226) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/226-Q4_K_M-GGUF --hf-file 226-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/226-Q4_K_M-GGUF --hf-file 226-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/226-Q4_K_M-GGUF --hf-file 226-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/226-Q4_K_M-GGUF --hf-file 226-q4_k_m.gguf -c 2048
```
|
messham/q-FrozenLake-v1-4x4-noSlippery | messham | "2023-03-21T20:33:57Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-21T20:33:49Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper from the Deep RL course notebook (it unpickles the Q-table from the Hub).
model = load_from_hub(repo_id="messham/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
atsuki-yamaguchi/gemma-2-9b-my-30K-1000-mean | atsuki-yamaguchi | "2024-09-17T09:30:55Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"my",
"arxiv:2406.11477",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2024-09-06T18:29:54Z" |
---
license: gemma
language:
- my
base_model: google/gemma-2-9b
library_name: transformers
---
# Gemma2 9B for Burmese: 1000 target vocabulary size + Mean target vocabulary initialization + 2x2LS/MTP/512 training
This model is built on top of Gemma2 9B adapted for Burmese using 30K target language sentences sampled from CC-100.
## Model Details
* **Vocabulary**: This model has an additional 1000 target vocabulary.
* **Target vocabulary initialization**: The target weights of the embedding were initialized using Mean initialization.
* **Training**: This model was additionally pre-trained on 30K target language sentences sampled from CC-100. The training was conducted with the 2x2LS/MTP/512 strategies introduced in the paper.
## Model Description
- **Language:** Burmese
- **License:** Gemma Terms of Use
- **Fine-tuned from model:** google/gemma-2-9b
## Model Sources
- **Repository:** https://github.com/gucci-j/lowres-cve
- **Paper:** https://arxiv.org/abs/2406.11477
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"atsuki-yamaguchi/gemma-2-9b-my-30K-1000-mean"
)
tokenizer = AutoTokenizer.from_pretrained(
"atsuki-yamaguchi/gemma-2-9b-my-30K-1000-mean"
)
```
## Citation
```
@article{yamaguchi-etal-2024-effectively,
title={How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text?},
author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras},
year={2024},
journal={ArXiv},
volume={abs/2406.11477},
url={https://arxiv.org/abs/2406.11477},
}
```
|
mrferr3t/7aed58b8-860d-4784-aebf-9bf246a01654 | mrferr3t | "2025-02-06T06:11:44Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | "2025-02-06T05:52:57Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7aed58b8-860d-4784-aebf-9bf246a01654
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- d952e0a5c02eb0c2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d952e0a5c02eb0c2_train_data.json
type:
field_input: rejected
field_instruction: prompt
field_output: chosen
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.001
eval_max_new_tokens: 128
eval_steps: 40
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/7aed58b8-860d-4784-aebf-9bf246a01654
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0003
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 100
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 32
mlflow_experiment_name: /tmp/d952e0a5c02eb0c2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 50
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
s2_attention: null
sample_packing: false
save_steps: 40
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.02
wandb_entity: null
wandb_mode: online
wandb_name: 17d283c1-9977-4f78-b3fc-bc3f50b2f052
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 17d283c1-9977-4f78-b3fc-bc3f50b2f052
warmup_ratio: 0.05
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7aed58b8-860d-4784-aebf-9bf246a01654
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3922
## Model description
More information needed
## Intended uses & limitations
More information needed
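This repository contains a LoRA adapter rather than full model weights. A minimal loading sketch with PEFT is shown below (assuming the adapter weights sit in the repo root and reference the base model in their config):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "mrferr3t/7aed58b8-860d-4784-aebf-9bf246a01654"

# AutoPeftModelForCausalLM reads the adapter config and pulls in the base model
# (unsloth/mistral-7b-instruct-v0.2) automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-instruct-v0.2")

prompt = "Explain in one sentence what a LoRA adapter is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```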
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 155
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0020 | 1 | 5.4757 |
| No log | 0.0806 | 40 | 1.9050 |
| No log | 0.1611 | 80 | 1.6329 |
| 4.8943 | 0.2417 | 120 | 1.6521 |
| 4.8943 | 0.3223 | 160 | 1.4903 |
| 2.9767 | 0.4028 | 200 | 1.3754 |
| 2.9767 | 0.4834 | 240 | 1.3377 |
| 2.9767 | 0.5639 | 280 | 1.2167 |
| 2.613 | 0.6445 | 320 | 1.1355 |
| 2.613 | 0.7251 | 360 | 0.9053 |
| 2.007 | 0.8056 | 400 | 0.7135 |
| 2.007 | 0.8862 | 440 | 0.7040 |
| 2.007 | 0.9668 | 480 | 0.6939 |
| 1.4776 | 1.0473 | 520 | 0.7442 |
| 1.4776 | 1.1279 | 560 | 0.6491 |
| 0.8086 | 1.2085 | 600 | 0.6461 |
| 0.8086 | 1.2890 | 640 | 0.5510 |
| 0.8086 | 1.3696 | 680 | 0.5785 |
| 0.8686 | 1.4502 | 720 | 0.5694 |
| 0.8686 | 1.5307 | 760 | 0.4838 |
| 0.778 | 1.6113 | 800 | 0.4864 |
| 0.778 | 1.6918 | 840 | 0.3634 |
| 0.778 | 1.7724 | 880 | 0.3510 |
| 0.7255 | 1.8530 | 920 | 0.3188 |
| 0.7255 | 1.9335 | 960 | 0.3315 |
| 0.746 | 2.0141 | 1000 | 0.3846 |
| 0.746 | 2.0947 | 1040 | 0.3922 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
NTQAI/NxMobileLM-1.5B-SFT | NTQAI | "2025-01-19T01:59:29Z" | 150 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"ja",
"ko",
"vi",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-15T09:11:42Z" | ---
license: mit
language:
- en
- ja
- ko
- vi
base_model:
- Qwen/Qwen2.5-1.5B
pipeline_tag: text-generation
library_name: transformers
---
# NxMobileLM-1.5B-SFT
## Model Description
`NxMobileLM-1.5B-SFT` is a fine-tuned version of the base model `Qwen2.5-1.5B`, optimized for mobile and edge applications. This model has been trained on proprietary instruction datasets curated to enhance performance in natural language understanding and generation tasks tailored to specific applications.
### Key Features:
- **Base Model:** Qwen2.5-1.5B
- **Parameter Count:** 1.5 billion
- **Fine-tuning Objective:** Supervised fine-tuning (SFT) on instruction datasets.
- **Specialization:** Lightweight and efficient performance for mobile environments.
- **Multilingual Support:** Designed to handle multiple languages effectively, enabling robust cross-lingual capabilities for diverse applications.
## Model Details
### Training Data
The model was fine-tuned using a proprietary dataset designed for diverse instruction-following tasks, including question answering, summarization, and dialogue. The dataset emphasizes:
- Multi-domain generalization
- Task-specific instruction understanding
- Multilingual Coverage: Training data includes samples from several major languages to enhance cross-lingual understanding.
### Training Configuration
- **Framework:** PyTorch
- **Optimizer:** AdamW
- **Learning Rate:** 5e-5
- **Batch Size:** 128
- **Epochs:** 3
- **Mixed Precision:** FP16
### Evaluation
The model was evaluated on a variety of benchmarks, demonstrating:
- **High Accuracy:** Achieves strong performance across general natural language tasks.
- **Efficiency:** Optimized for low-latency inference on edge devices.
- **Multilingual Competence:** Strong performance across multiple languages, making it suitable for global applications.
### Performance Comparison
#### Open-LLM Leaderboard
On January 15, 2025, NxMobileLM-1.5B-SFT was ranked among the top 10 edge device models with fewer than 3 billion parameters and achieved the first rank for models with under 2 billion parameters, according to the [OpenLLM leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?params=0%2C3).

#### P-MMEval
To evaluate the multilingual capabilities of the model, we conducted evaluations on several benchmarks across three languages: English (en), Japanese (ja), and Vietnamese (vi). For detailed benchmark information, refer to [P-MMEval](https://huggingface.co/datasets/Qwen/P-MMEval).
| Benchmark | Llama-3.2-1B-Instruct | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | NxMobileLM-1.5B-SFT |
|------------------|------------------------|------------------------|------------------------|---------------------|
| mifeval-en | 44.79 | 43.75 | 50 | **57.29** |
| mifeval-ja | 22.92 | 23.96 | 29.17 | **30.21** |
| mifeval-vi | 30.21 | 25 | 28.12 | **46.88** |
| mmmlu-EN-US | 35.25 | 42.5 | **45.5** | 45.25 |
| mmmlu-JA-JP | 31.5 | 26.25 | 36.00 | **41.00** |
| mmmlu-VI-VT | 22.75 | 22.25 | **39.00** | 38.00 |
| xnli-en | 35.83 | 35.83 | 59.17 | **66.67** |
| xnli-ja | 34.17 | 35.83 | 52.5 | **57.5** |
| xnli-vi | 37.5 | 34.17 | 45.83 | **55.83** |
| **Average** | 32.21 | 31.61 | 42.93 | **48.07** |
#### LightEval
The table below compares `NxMobileLM-1.5B-SFT` with other instruction-tuned models using various benchmarks. Results were obtained using the [lighteval](https://github.com/huggingface/lighteval) evaluation framework and are referenced from [Hugging Face TB](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct):
| Metric | SmolLM2-1.7B-Instruct | Llama-1B-Instruct | Qwen2.5-1.5B-Instruct | SmolLM1-1.7B-Instruct | NxMobileLM-1.5B-SFT |
|------------------------------|-----------------------|-------------------|-----------------------|-----------------------|-----------------|
| IFEval (Average prompt/inst) | 56.7 | 53.5 | 47.4 | 23.1 | **64.2** |
| HellaSwag | **66.1** | 56.1 | 60.9 | 55.5 | 63.57 |
| ARC (Average) | **51.7** | 41.6 | 46.2 | 43.7 | 45.21 |
| PIQA | **74.4** | 72.3 | 73.2 | 71.6 | 72.91 |
| MMLU-Pro (MCF) | 19.3 | 12.7 | **24.2** | 11.7 | 15.43 |
| BBH (3-shot) | 32.2 | 27.6 | **35.3** | 25.7 | 31.44 |
| GSM8K (5-shot) | 48.2 | 26.8 | 42.8 | 4.62 | **59.51** |
| **Average** | 49.8 | 41.5 | 47.1 | 33.7 | **50.3** |
### Limitations
While `NxMobileLM-1.5B-SFT` excels in many areas, it may not perform well on tasks outside the scope of the fine-tuned dataset. Biases inherent in the training data may also affect outcomes.
## Intended Use
`NxMobileLM-1.5B-SFT` is designed for use in:
- Mobile virtual assistants
- Real-time language-based applications
- Compact edge AI solutions
- Multilingual Scenarios: Supporting applications that require cross-lingual communication and understanding.
**Misuse Warning:** The model is not intended for use in generating harmful, biased, or illegal content.
## How to Use
Here is a sample code snippet to load and use the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the model and tokenizer
model_name = "NTQAI/NxMobileLM-1.5B-SFT"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Example usage
inputs = tokenizer("What is the capital of Vietnam?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
If you use this model in your research, please cite it as:
```
@misc{NxMobileLM-1.5B-SFT,
title={NxMobileLM-1.5B-SFT},
author={NTQAI},
year={2025},
url={https://huggingface.co/NTQAI/NxMobileLM-1.5B-SFT},
}
```
## License
This model is licensed under MIT.
## Contact
For questions or issues, please contact us via website: https://ntq.ai
|
HrayrM/distilbert-base-uncased-finetuned-clinc | HrayrM | "2022-06-10T01:17:59Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-06-10T00:50:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9135483870967742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7771
- Accuracy: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
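As a rough illustration (not part of the original card), the checkpoint can be used for intent classification with the standard text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="HrayrM/distilbert-base-uncased-finetuned-clinc",
)

# Predicts one of the CLINC150 ("plus" config) intent labels, including out-of-scope.
print(classifier("Please transfer 100 dollars from my checking to my savings account."))
```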
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2843 | 1.0 | 318 | 3.2793 | 0.7448 |
| 2.6208 | 2.0 | 636 | 1.8750 | 0.8297 |
| 1.5453 | 3.0 | 954 | 1.1565 | 0.8919 |
| 1.0141 | 4.0 | 1272 | 0.8628 | 0.9090 |
| 0.795 | 5.0 | 1590 | 0.7771 | 0.9135 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0
- Datasets 2.2.2
- Tokenizers 0.10.3
|
dimasik87/b9ad1d9e-ed4f-42f8-8cb7-5903f6ee5487 | dimasik87 | "2025-01-13T01:56:16Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | "2025-01-13T01:55:40Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b9ad1d9e-ed4f-42f8-8cb7-5903f6ee5487
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 06f35314c9c4013e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/06f35314c9c4013e_train_data.json
type:
field_input: activity
field_instruction: topic
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: dimasik87/b9ad1d9e-ed4f-42f8-8cb7-5903f6ee5487
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/06f35314c9c4013e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63edf449-a565-4b6c-aeff-ff71e5aa06a2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63edf449-a565-4b6c-aeff-ff71e5aa06a2
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# b9ad1d9e-ed4f-42f8-8cb7-5903f6ee5487
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 11.9324 |
| 11.9324 | 0.0038 | 8 | 11.9323 |
| 11.932 | 0.0076 | 16 | 11.9320 |
| 11.932 | 0.0114 | 24 | 11.9317 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
onnx-community/siglip2-base-patch16-384-ONNX | onnx-community | "2025-02-21T17:52:06Z" | 0 | 0 | transformers.js | [
"transformers.js",
"onnx",
"siglip",
"base_model:google/siglip2-base-patch16-384",
"base_model:quantized:google/siglip2-base-patch16-384",
"region:us"
] | null | "2025-02-21T17:48:44Z" | ---
library_name: transformers.js
base_model: google/siglip2-base-patch16-384
---
https://huggingface.co/google/siglip2-base-patch16-384 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
John6666/5moon-pony-doll-v10-sdxl | John6666 | "2024-12-23T06:48:14Z" | 100 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"game",
"girls",
"3D",
"CG",
"realistic",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-11-16T05:10:50Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- game
- girls
- 3D
- CG
- realistic
- pony
---
Original model is [here](https://civitai.com/models/906374/5moonponydoll?modelVersionId=1014259).
This model was created by [3moon](https://civitai.com/user/3moon).
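No usage example is provided in the original card; below is a minimal text-to-image sketch with 🤗 Diffusers. The prompt tags and sampler settings are illustrative assumptions, not recommendations from the model author.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/5moon-pony-doll-v10-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Pony-derived checkpoints usually respond to score/quality tags; the prompt below is
# only an illustrative guess.
image = pipe(
    "score_9, score_8_up, 1girl, doll-like outfit, soft lighting",
    negative_prompt="lowres, bad anatomy, bad hands",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```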
|
LunaticPython161/Lily-MoE-2x7b | LunaticPython161 | "2024-02-10T23:53:18Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"LunaticPython161/CyberWitch-7B",
"base_model:LunaticPython161/CyberWitch-7B",
"base_model:merge:LunaticPython161/CyberWitch-7B",
"base_model:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"base_model:merge:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-10T23:44:57Z" | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- LunaticPython161/CyberWitch-7B
base_model:
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- LunaticPython161/CyberWitch-7B
---
# Lily-MoE-2x7b
Lily-MoE-2x7b is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser)
* [LunaticPython161/CyberWitch-7B](https://huggingface.co/LunaticPython161/CyberWitch-7B)
## 🧩 Configuration
```yaml
base_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
- source_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- "code"
- "programming"
- source_model: LunaticPython161/CyberWitch-7B
positive_prompts:
- "solve"
- "count"
- "math"
- "mathematics"
- "algorithm"
- "cypher"
- "cybersecurity"
- "penetration testing"
- "red team"
- "blue team"
- "hacking"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "LunaticPython161/Lily-MoE-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
evenicole/zephyr-alpha-GPTQ-enem | evenicole | "2023-12-14T06:32:54Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | "2023-12-13T23:37:46Z" | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-alpha-GPTQ-enem
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-alpha-GPTQ-enem
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
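The repository holds a PEFT adapter trained on top of the GPTQ-quantized base model. A minimal loading sketch is shown below; the adapter layout and the Portuguese prompt are assumptions.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/zephyr-7B-alpha-GPTQ"
adapter_id = "evenicole/zephyr-alpha-GPTQ-enem"

# Loading the GPTQ base requires a GPTQ backend (e.g. optimum + auto-gptq) to be installed.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "Explique brevemente o que é fotossíntese."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```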
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0 |
MaziyarPanahi/airoboros-m-7b-3.1.2-dare-0.85-Mistral-7B-Instruct-v0.1 | MaziyarPanahi | "2024-01-17T07:48:43Z" | 20 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"uukuguy/airoboros-m-7b-3.1.2-dare-0.85",
"pytorch",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-01-17T07:38:41Z" | ---
license: apache-2.0
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- uukuguy/airoboros-m-7b-3.1.2-dare-0.85
- transformers
- pytorch
- mistral
- text-generation
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
---
# airoboros-m-7b-3.1.2-dare-0.85-Mistral-7B-Instruct-v0.1
airoboros-m-7b-3.1.2-dare-0.85-Mistral-7B-Instruct-v0.1 is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [uukuguy/airoboros-m-7b-3.1.2-dare-0.85](https://huggingface.co/uukuguy/airoboros-m-7b-3.1.2-dare-0.85)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: uukuguy/airoboros-m-7b-3.1.2-dare-0.85
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/airoboros-m-7b-3.1.2-dare-0.85-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
johnpaulbin/articulate-11-expspanish-base-merged | johnpaulbin | "2025-01-31T17:05:13Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-31T17:03:11Z" | ---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** johnpaulbin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Slicky325/rm-trial | Slicky325 | "2025-01-10T20:06:39Z" | 15 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/gpt2-medium",
"base_model:finetune:openai-community/gpt2-medium",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-10T07:31:54Z" | ---
library_name: transformers
license: mit
base_model: openai-community/gpt2-medium
tags:
- generated_from_trainer
model-index:
- name: rm-trial
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rm-trial
This model is a fine-tuned version of [openai-community/gpt2-medium](https://huggingface.co/openai-community/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
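The checkpoint is a GPT-2-medium sequence-classification head, typically used as a reward model. A minimal scoring sketch is shown below; the single-scalar reward interpretation is an assumption.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Slicky325/rm-trial"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# GPT-2 has no pad token by default; reuse EOS so inputs can be padded if batched.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.eos_token_id

text = "Question: What is 2 + 2?\nAnswer: 4."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # a single scalar per input is assumed, with higher meaning "better"
```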
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
BridgeTower/bridgetower-large-itm-mlm-itc | BridgeTower | "2023-03-08T22:33:21Z" | 895,974 | 11 | transformers | [
"transformers",
"pytorch",
"bridgetower",
"gaudi",
"en",
"dataset:conceptual_captions",
"dataset:conceptual_12m",
"dataset:sbu_captions",
"dataset:visual_genome",
"dataset:mscoco_captions",
"arxiv:2206.08657",
"arxiv:1504.00325",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2023-02-11T00:25:58Z" | ---
language: en
tags:
- bridgetower
- gaudi
license: mit
datasets:
- conceptual_captions
- conceptual_12m
- sbu_captions
- visual_genome
- mscoco_captions
---
# BridgeTower large-itm-mlm-itc model
The BridgeTower model was proposed in "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
The model was pretrained on English-language data using masked language modeling (MLM) and image-text matching (ITM) objectives. It was introduced in
[this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in
[this repository](https://github.com/microsoft/BridgeTower).
BridgeTower got accepted to [AAAI'23](https://aaai.org/Conferences/AAAI-23/).
## Model description
The abstract from the paper is the following:
Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.
## Intended uses & limitations
### How to use
Here is how to use this model to perform contrastive learning between image and text pairs:
```python
from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning
import requests
from PIL import Image
import torch
image_urls = [
"https://farm4.staticflickr.com/3395/3428278415_81c3e27f15_z.jpg",
"http://images.cocodataset.org/val2017/000000039769.jpg"]
texts = [
"two dogs in a car",
"two cats sleeping on a couch"]
images = [Image.open(requests.get(url, stream=True).raw) for url in image_urls]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm")
model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc")
inputs = processor(images, texts, padding=True, return_tensors="pt")
outputs = model(**inputs, return_loss=True)
inputs = processor(images, texts[::-1], padding=True, return_tensors="pt")
outputs_swapped = model(**inputs, return_loss=True)
print('Loss', outputs.loss.item())
# Loss 0.00191505195107311
print('Loss with swapped images', outputs_swapped.loss.item())
# Loss with swapped images 2.1259872913360596
```
Here is how to use this model to perform image and text matching
```python
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
# forward pass
scores = dict()
for text in texts:
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs.logits[0,1].item()
```
Here is how to use this model to perform masked language modeling:
```python
from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000360943.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
text = "a <mask> looking out of the window"
processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi")
# prepare inputs
encoding = processor(image, text, return_tensors="pt")
# forward pass
outputs = model(**encoding)
results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist())
print(results)
#.a cat looking out of the window.
```
## Training data
The BridgeTower model was pretrained on the following public image-caption datasets:
- [Conceptual Captions (CC3M)](https://ai.google.com/research/ConceptualCaptions/)
- [Conceptual 12M (CC12M)](https://github.com/google-research-datasets/conceptual-12m)
- [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/)
- [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf)
- [Visual Genome](https://visualgenome.org/)
The total number of unique images in the combined data is around 14M.
## Training procedure
### Pretraining
The model was pre-trained for 10 epochs on an Intel AI supercomputing cluster using 512 Gaudis and 128 Xeons with a batch size of 2048.
The optimizer used was AdamW with a learning rate of 1e-7. No data augmentation was used except for center-crop. The image resolution in pre-training is set to 294 x 294.
## Evaluation results
Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other downstream tasks.
### BibTeX entry and citation info
```bibtex
@article{xu2022bridge,
title={BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning},
author={Xu, Xiao and Wu, Chenfei and Rosenman, Shachar and Lal, Vasudev and Che, Wanxiang and Duan, Nan},
journal={arXiv preprint arXiv:2206.08657},
year={2022}
}
``` |
cpgrant/gemma-2-2B-it-thinking-function_calling-V0 | cpgrant | "2025-02-21T09:09:13Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | "2025-02-21T08:26:01Z" | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cpgrant/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Shlomo/ppo-Huggy | Shlomo | "2023-09-15T05:48:05Z" | 24 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-09-15T05:48:01Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Shlomo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DanGalt/poca-SoccerTwos | DanGalt | "2023-02-03T14:08:28Z" | 42 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-02-03T14:06:02Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: DanGalt/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ellen625/opt125_wiki_rlo_k10 | ellen625 | "2024-05-21T07:02:48Z" | 134 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:finetune:facebook/opt-125m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-21T04:14:29Z" | ---
license: other
tags:
- generated_from_trainer
base_model: facebook/opt-125m
model-index:
- name: opt125_wiki_rlo_k10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt125_wiki_rlo_k10
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7443
## Model description
More information needed
## Intended uses & limitations
More information needed
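A minimal text-generation sketch for this checkpoint (not part of the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ellen625/opt125_wiki_rlo_k10")
print(generator("The printing press was", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```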
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8693 | 0.8340 | 500 | 1.7542 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.2
- Datasets 2.19.1
- Tokenizers 0.19.1
|
LuizNeves/DeBERTa-v3-large-mnli-fever-anli-ling-wanli-vaccine | LuizNeves | "2023-08-07T07:46:50Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | "2023-08-04T09:49:25Z" | ---
pipeline_tag: zero-shot-classification
--- |
RRashmini/whisper-small-sinhala-26-specA-l-10-grad4-2-nextData-3-tm-400-epoch1 | RRashmini | "2025-03-12T11:05:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-03-12T11:04:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Helsinki-NLP/opus-mt-tc-bible-big-cel-deu_eng_fra_por_spa | Helsinki-NLP | "2024-10-07T20:10:34Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"br",
"cy",
"de",
"en",
"es",
"fr",
"ga",
"gd",
"gv",
"kw",
"pt",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2024-10-07T20:10:19Z" | ---
library_name: transformers
language:
- br
- cy
- de
- en
- es
- fr
- ga
- gd
- gv
- kw
- pt
tags:
- translation
- opus-mt-tc-bible
license: apache-2.0
model-index:
- name: opus-mt-tc-bible-big-cel-deu_eng_fra_por_spa
results:
- task:
name: Translation cym-deu
type: translation
args: cym-deu
dataset:
name: flores200-devtest
type: flores200-devtest
args: cym-deu
metrics:
- name: BLEU
type: bleu
value: 22.6
- name: chr-F
type: chrf
value: 0.52745
- task:
name: Translation cym-eng
type: translation
args: cym-eng
dataset:
name: flores200-devtest
type: flores200-devtest
args: cym-eng
metrics:
- name: BLEU
type: bleu
value: 55.5
- name: chr-F
type: chrf
value: 0.75234
- task:
name: Translation cym-fra
type: translation
args: cym-fra
dataset:
name: flores200-devtest
type: flores200-devtest
args: cym-fra
metrics:
- name: BLEU
type: bleu
value: 31.4
- name: chr-F
type: chrf
value: 0.58339
- task:
name: Translation cym-por
type: translation
args: cym-por
dataset:
name: flores200-devtest
type: flores200-devtest
args: cym-por
metrics:
- name: BLEU
type: bleu
value: 18.3
- name: chr-F
type: chrf
value: 0.47566
- task:
name: Translation cym-spa
type: translation
args: cym-spa
dataset:
name: flores200-devtest
type: flores200-devtest
args: cym-spa
metrics:
- name: BLEU
type: bleu
value: 19.9
- name: chr-F
type: chrf
value: 0.48834
- task:
name: Translation gla-deu
type: translation
args: gla-deu
dataset:
name: flores200-devtest
type: flores200-devtest
args: gla-deu
metrics:
- name: BLEU
type: bleu
value: 13.0
- name: chr-F
type: chrf
value: 0.41962
- task:
name: Translation gla-eng
type: translation
args: gla-eng
dataset:
name: flores200-devtest
type: flores200-devtest
args: gla-eng
metrics:
- name: BLEU
type: bleu
value: 26.4
- name: chr-F
type: chrf
value: 0.53374
- task:
name: Translation gla-fra
type: translation
args: gla-fra
dataset:
name: flores200-devtest
type: flores200-devtest
args: gla-fra
metrics:
- name: BLEU
type: bleu
value: 16.6
- name: chr-F
type: chrf
value: 0.44916
- task:
name: Translation gla-por
type: translation
args: gla-por
dataset:
name: flores200-devtest
type: flores200-devtest
args: gla-por
metrics:
- name: BLEU
type: bleu
value: 12.1
- name: chr-F
type: chrf
value: 0.39790
- task:
name: Translation gla-spa
type: translation
args: gla-spa
dataset:
name: flores200-devtest
type: flores200-devtest
args: gla-spa
metrics:
- name: BLEU
type: bleu
value: 12.9
- name: chr-F
type: chrf
value: 0.40375
- task:
name: Translation gle-deu
type: translation
args: gle-deu
dataset:
name: flores200-devtest
type: flores200-devtest
args: gle-deu
metrics:
- name: BLEU
type: bleu
value: 19.2
- name: chr-F
type: chrf
value: 0.49962
- task:
name: Translation gle-eng
type: translation
args: gle-eng
dataset:
name: flores200-devtest
type: flores200-devtest
args: gle-eng
metrics:
- name: BLEU
type: bleu
value: 38.9
- name: chr-F
type: chrf
value: 0.64866
- task:
name: Translation gle-fra
type: translation
args: gle-fra
dataset:
name: flores200-devtest
type: flores200-devtest
args: gle-fra
metrics:
- name: BLEU
type: bleu
value: 26.7
- name: chr-F
type: chrf
value: 0.54564
- task:
name: Translation gle-por
type: translation
args: gle-por
dataset:
name: flores200-devtest
type: flores200-devtest
args: gle-por
metrics:
- name: BLEU
type: bleu
value: 14.9
- name: chr-F
type: chrf
value: 0.44768
- task:
name: Translation gle-spa
type: translation
args: gle-spa
dataset:
name: flores200-devtest
type: flores200-devtest
args: gle-spa
metrics:
- name: BLEU
type: bleu
value: 18.7
- name: chr-F
type: chrf
value: 0.47347
- task:
name: Translation cym-deu
type: translation
args: cym-deu
dataset:
name: flores101-devtest
type: flores_101
args: cym deu devtest
metrics:
- name: BLEU
type: bleu
value: 22.4
- name: chr-F
type: chrf
value: 0.52672
- task:
name: Translation cym-fra
type: translation
args: cym-fra
dataset:
name: flores101-devtest
type: flores_101
args: cym fra devtest
metrics:
- name: BLEU
type: bleu
value: 31.3
- name: chr-F
type: chrf
value: 0.58299
- task:
name: Translation cym-por
type: translation
args: cym-por
dataset:
name: flores101-devtest
type: flores_101
args: cym por devtest
metrics:
- name: BLEU
type: bleu
value: 18.4
- name: chr-F
type: chrf
value: 0.47733
- task:
name: Translation gle-eng
type: translation
args: gle-eng
dataset:
name: flores101-devtest
type: flores_101
args: gle eng devtest
metrics:
- name: BLEU
type: bleu
value: 38.6
- name: chr-F
type: chrf
value: 0.64773
- task:
name: Translation gle-fra
type: translation
args: gle-fra
dataset:
name: flores101-devtest
type: flores_101
args: gle fra devtest
metrics:
- name: BLEU
type: bleu
value: 26.5
- name: chr-F
type: chrf
value: 0.54559
- task:
name: Translation cym-deu
type: translation
args: cym-deu
dataset:
name: ntrex128
type: ntrex128
args: cym-deu
metrics:
- name: BLEU
type: bleu
value: 16.3
- name: chr-F
type: chrf
value: 0.46627
- task:
name: Translation cym-eng
type: translation
args: cym-eng
dataset:
name: ntrex128
type: ntrex128
args: cym-eng
metrics:
- name: BLEU
type: bleu
value: 40.0
- name: chr-F
type: chrf
value: 0.65343
- task:
name: Translation cym-fra
type: translation
args: cym-fra
dataset:
name: ntrex128
type: ntrex128
args: cym-fra
metrics:
- name: BLEU
type: bleu
value: 23.8
- name: chr-F
type: chrf
value: 0.51183
- task:
name: Translation cym-por
type: translation
args: cym-por
dataset:
name: ntrex128
type: ntrex128
args: cym-por
metrics:
- name: BLEU
type: bleu
value: 14.4
- name: chr-F
type: chrf
value: 0.42857
- task:
name: Translation cym-spa
type: translation
args: cym-spa
dataset:
name: ntrex128
type: ntrex128
args: cym-spa
metrics:
- name: BLEU
type: bleu
value: 25.0
- name: chr-F
type: chrf
value: 0.51542
- task:
name: Translation gle-deu
type: translation
args: gle-deu
dataset:
name: ntrex128
type: ntrex128
args: gle-deu
metrics:
- name: BLEU
type: bleu
value: 15.5
- name: chr-F
type: chrf
value: 0.46495
- task:
name: Translation gle-eng
type: translation
args: gle-eng
dataset:
name: ntrex128
type: ntrex128
args: gle-eng
metrics:
- name: BLEU
type: bleu
value: 33.5
- name: chr-F
type: chrf
value: 0.60913
- task:
name: Translation gle-fra
type: translation
args: gle-fra
dataset:
name: ntrex128
type: ntrex128
args: gle-fra
metrics:
- name: BLEU
type: bleu
value: 20.7
- name: chr-F
type: chrf
value: 0.49513
- task:
name: Translation gle-por
type: translation
args: gle-por
dataset:
name: ntrex128
type: ntrex128
args: gle-por
metrics:
- name: BLEU
type: bleu
value: 13.2
- name: chr-F
type: chrf
value: 0.41767
- task:
name: Translation gle-spa
type: translation
args: gle-spa
dataset:
name: ntrex128
type: ntrex128
args: gle-spa
metrics:
- name: BLEU
type: bleu
value: 23.6
- name: chr-F
type: chrf
value: 0.50755
- task:
name: Translation bre-eng
type: translation
args: bre-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bre-eng
metrics:
- name: BLEU
type: bleu
value: 35.0
- name: chr-F
type: chrf
value: 0.53473
- task:
name: Translation bre-fra
type: translation
args: bre-fra
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bre-fra
metrics:
- name: BLEU
type: bleu
value: 28.3
- name: chr-F
type: chrf
value: 0.49013
- task:
name: Translation cym-eng
type: translation
args: cym-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: cym-eng
metrics:
- name: BLEU
type: bleu
value: 52.4
- name: chr-F
type: chrf
value: 0.68892
- task:
name: Translation gla-eng
type: translation
args: gla-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: gla-eng
metrics:
- name: BLEU
type: bleu
value: 23.2
- name: chr-F
type: chrf
value: 0.39607
- task:
name: Translation gla-spa
type: translation
args: gla-spa
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: gla-spa
metrics:
- name: BLEU
type: bleu
value: 26.1
- name: chr-F
type: chrf
value: 0.51208
- task:
name: Translation gle-eng
type: translation
args: gle-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: gle-eng
metrics:
- name: BLEU
type: bleu
value: 50.7
- name: chr-F
type: chrf
value: 0.64268
- task:
name: Translation multi-multi
type: translation
args: multi-multi
dataset:
name: tatoeba-test-v2020-07-28-v2023-09-26
type: tatoeba_mt
args: multi-multi
metrics:
- name: BLEU
type: bleu
value: 24.9
- name: chr-F
type: chrf
value: 0.42670
---
# opus-mt-tc-bible-big-cel-deu_eng_fra_por_spa
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)
## Model Details
Neural machine translation model for translating from Celtic languages (cel) to German, English, French, Portuguese and Spanish (deu+eng+fra+por+spa).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release**: 2024-05-30
- **License:** Apache-2.0
- **Language(s):**
- Source Language(s): bre cor cym gla gle glv
- Target Language(s): deu eng fra por spa
- Valid Target Language Labels: >>deu<< >>eng<< >>fra<< >>por<< >>spa<< >>xxx<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Resources for more information:**
- [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/cel-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
- [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>deu<<`
## Uses
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## How to Get Started With the Model
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
">>deu<< Replace this with text in an accepted source language.",
">>spa<< This is the second sentence."
]
model_name = "pytorch-models/opus-mt-tc-bible-big-cel-deu_eng_fra_por_spa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-cel-deu_eng_fra_por_spa")
print(pipe(">>deu<< Replace this with text in an accepted source language."))
```
## Training
- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Evaluation
* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/cel-deu%2Beng%2Bfra%2Bpor%2Bspa/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-deu+eng+fra+por+spa/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bre-eng | tatoeba-test-v2021-08-07 | 0.53473 | 35.0 | 383 | 2065 |
| bre-fra | tatoeba-test-v2021-08-07 | 0.49013 | 28.3 | 2494 | 13324 |
| cym-eng | tatoeba-test-v2021-08-07 | 0.68892 | 52.4 | 818 | 5563 |
| gla-eng | tatoeba-test-v2021-08-07 | 0.39607 | 23.2 | 955 | 6611 |
| gla-spa | tatoeba-test-v2021-08-07 | 0.51208 | 26.1 | 289 | 1608 |
| gle-eng | tatoeba-test-v2021-08-07 | 0.64268 | 50.7 | 1913 | 11190 |
| cym-deu | flores101-devtest | 0.52672 | 22.4 | 1012 | 25094 |
| cym-fra | flores101-devtest | 0.58299 | 31.3 | 1012 | 28343 |
| cym-por | flores101-devtest | 0.47733 | 18.4 | 1012 | 26519 |
| gle-eng | flores101-devtest | 0.64773 | 38.6 | 1012 | 24721 |
| gle-fra | flores101-devtest | 0.54559 | 26.5 | 1012 | 28343 |
| cym-deu | flores200-devtest | 0.52745 | 22.6 | 1012 | 25094 |
| cym-eng | flores200-devtest | 0.75234 | 55.5 | 1012 | 24721 |
| cym-fra | flores200-devtest | 0.58339 | 31.4 | 1012 | 28343 |
| cym-por | flores200-devtest | 0.47566 | 18.3 | 1012 | 26519 |
| cym-spa | flores200-devtest | 0.48834 | 19.9 | 1012 | 29199 |
| gla-deu | flores200-devtest | 0.41962 | 13.0 | 1012 | 25094 |
| gla-eng | flores200-devtest | 0.53374 | 26.4 | 1012 | 24721 |
| gla-fra | flores200-devtest | 0.44916 | 16.6 | 1012 | 28343 |
| gla-spa | flores200-devtest | 0.40375 | 12.9 | 1012 | 29199 |
| gle-deu | flores200-devtest | 0.49962 | 19.2 | 1012 | 25094 |
| gle-eng | flores200-devtest | 0.64866 | 38.9 | 1012 | 24721 |
| gle-fra | flores200-devtest | 0.54564 | 26.7 | 1012 | 28343 |
| gle-por | flores200-devtest | 0.44768 | 14.9 | 1012 | 26519 |
| gle-spa | flores200-devtest | 0.47347 | 18.7 | 1012 | 29199 |
| cym-deu | ntrex128 | 0.46627 | 16.3 | 1997 | 48761 |
| cym-eng | ntrex128 | 0.65343 | 40.0 | 1997 | 47673 |
| cym-fra | ntrex128 | 0.51183 | 23.8 | 1997 | 53481 |
| cym-por | ntrex128 | 0.42857 | 14.4 | 1997 | 51631 |
| cym-spa | ntrex128 | 0.51542 | 25.0 | 1997 | 54107 |
| gle-deu | ntrex128 | 0.46495 | 15.5 | 1997 | 48761 |
| gle-eng | ntrex128 | 0.60913 | 33.5 | 1997 | 47673 |
| gle-fra | ntrex128 | 0.49513 | 20.7 | 1997 | 53481 |
| gle-por | ntrex128 | 0.41767 | 13.2 | 1997 | 51631 |
| gle-spa | ntrex128 | 0.50755 | 23.6 | 1997 | 54107 |
## Citation Information
* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w) and [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```bibtex
@article{tiedemann2023democratizing,
title={Democratizing neural machine translation with {OPUS-MT}},
author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
journal={Language Resources and Evaluation},
number={58},
pages={713--755},
year={2023},
publisher={Springer Nature},
issn={1574-0218},
doi={10.1007/s10579-023-09704-w}
}
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Acknowledgements
The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
## Model conversion info
* transformers version: 4.45.1
* OPUS-MT git hash: a0ea3b3
* port time: Mon Oct 7 23:09:42 EEST 2024
* port machine: LM0-400-22516.local
|
jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s741 | jonatasgrosman | "2022-12-11T17:54:29Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-11T17:54:19Z" | ---
language:
- en
license: apache-2.0
tags:
- automatic-speech-recognition
- en
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s741
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
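A minimal transcription sketch with the 🤗 Transformers ASR pipeline (`sample.wav` is a placeholder path; the audio should be sampled at 16 kHz as noted above):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_en_vp-100k_age_teens-8_sixties-2_s741",
)
# Transcribe a local 16 kHz audio file
print(asr("sample.wav")["text"])
```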
|
Julz1918/Humane3b-lora | Julz1918 | "2025-02-13T04:34:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-13T04:34:38Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Julz1918
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hkivancoral/hushem_40x_deit_small_adamax_0001_fold3 | hkivancoral | "2023-12-25T15:25:14Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-25T15:07:52Z" | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9302325581395349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7579
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0315 | 1.0 | 217 | 0.2572 | 0.9070 |
| 0.0049 | 2.0 | 434 | 0.4551 | 0.8837 |
| 0.0004 | 3.0 | 651 | 0.3965 | 0.8837 |
| 0.0001 | 4.0 | 868 | 0.4995 | 0.9070 |
| 0.0 | 5.0 | 1085 | 0.3370 | 0.9535 |
| 0.0 | 6.0 | 1302 | 0.4294 | 0.9302 |
| 0.0 | 7.0 | 1519 | 0.4525 | 0.9302 |
| 0.0 | 8.0 | 1736 | 0.4672 | 0.9302 |
| 0.0 | 9.0 | 1953 | 0.4797 | 0.9302 |
| 0.0 | 10.0 | 2170 | 0.4904 | 0.9302 |
| 0.0 | 11.0 | 2387 | 0.4947 | 0.9302 |
| 0.0 | 12.0 | 2604 | 0.5020 | 0.9302 |
| 0.0 | 13.0 | 2821 | 0.5084 | 0.9302 |
| 0.0 | 14.0 | 3038 | 0.5153 | 0.9302 |
| 0.0 | 15.0 | 3255 | 0.5246 | 0.9302 |
| 0.0 | 16.0 | 3472 | 0.5296 | 0.9302 |
| 0.0 | 17.0 | 3689 | 0.5346 | 0.9302 |
| 0.0 | 18.0 | 3906 | 0.5408 | 0.9302 |
| 0.0 | 19.0 | 4123 | 0.5469 | 0.9302 |
| 0.0 | 20.0 | 4340 | 0.5538 | 0.9302 |
| 0.0 | 21.0 | 4557 | 0.5570 | 0.9302 |
| 0.0 | 22.0 | 4774 | 0.5610 | 0.9302 |
| 0.0 | 23.0 | 4991 | 0.5712 | 0.9302 |
| 0.0 | 24.0 | 5208 | 0.5753 | 0.9302 |
| 0.0 | 25.0 | 5425 | 0.5846 | 0.9302 |
| 0.0 | 26.0 | 5642 | 0.5887 | 0.9302 |
| 0.0 | 27.0 | 5859 | 0.5949 | 0.9302 |
| 0.0 | 28.0 | 6076 | 0.6007 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.6068 | 0.9302 |
| 0.0 | 30.0 | 6510 | 0.6184 | 0.9302 |
| 0.0 | 31.0 | 6727 | 0.6280 | 0.9302 |
| 0.0 | 32.0 | 6944 | 0.6394 | 0.9302 |
| 0.0 | 33.0 | 7161 | 0.6407 | 0.9302 |
| 0.0 | 34.0 | 7378 | 0.6480 | 0.9302 |
| 0.0 | 35.0 | 7595 | 0.6588 | 0.9302 |
| 0.0 | 36.0 | 7812 | 0.6700 | 0.9302 |
| 0.0 | 37.0 | 8029 | 0.6709 | 0.9302 |
| 0.0 | 38.0 | 8246 | 0.6850 | 0.9302 |
| 0.0 | 39.0 | 8463 | 0.6933 | 0.9302 |
| 0.0 | 40.0 | 8680 | 0.7079 | 0.9302 |
| 0.0 | 41.0 | 8897 | 0.7123 | 0.9302 |
| 0.0 | 42.0 | 9114 | 0.7231 | 0.9302 |
| 0.0 | 43.0 | 9331 | 0.7313 | 0.9302 |
| 0.0 | 44.0 | 9548 | 0.7417 | 0.9302 |
| 0.0 | 45.0 | 9765 | 0.7473 | 0.9302 |
| 0.0 | 46.0 | 9982 | 0.7513 | 0.9302 |
| 0.0 | 47.0 | 10199 | 0.7551 | 0.9302 |
| 0.0 | 48.0 | 10416 | 0.7564 | 0.9302 |
| 0.0 | 49.0 | 10633 | 0.7578 | 0.9302 |
| 0.0 | 50.0 | 10850 | 0.7579 | 0.9302 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hoanghoavienvo/roberta-base-detect-cheapfake-combined-train-test-context | hoanghoavienvo | "2024-02-10T16:38:20Z" | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-10T16:21:31Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-cheapfake-combined-train-test-context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-combined-train-test-context
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4470
- Accuracy: 0.78
- F1: 0.7442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 75 | 0.6170 | 0.49 | 0.6577 |
| No log | 2.0 | 150 | 0.4459 | 0.79 | 0.7692 |
| No log | 3.0 | 225 | 0.4441 | 0.79 | 0.7692 |
| No log | 4.0 | 300 | 0.4404 | 0.81 | 0.7865 |
| No log | 5.0 | 375 | 0.4470 | 0.78 | 0.7442 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
fitkovskaja/trainer_output | fitkovskaja | "2025-03-25T23:47:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"dataset:HumanLLMs/Human-Like-DPO-Dataset",
"base_model:HuggingFaceTB/SmolLM-135M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM-135M-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-25T23:46:29Z" | ---
base_model: HuggingFaceTB/SmolLM-135M-Instruct
datasets: HumanLLMs/Human-Like-DPO-Dataset
library_name: transformers
model_name: trainer_output
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for trainer_output
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) on the [HumanLLMs/Human-Like-DPO-Dataset](https://huggingface.co/datasets/HumanLLMs/Human-Like-DPO-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fitkovskaja/trainer_output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
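Because the checkpoint was trained with TRL's reward modelling setup, it can also be loaded as a sequence-classification model to score a candidate answer. A minimal sketch, assuming the saved head is a single scalar reward (`num_labels=1`):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "fitkovskaja/trainer_output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "User: Which would you choose, past or future?\nAssistant: The future, because ..."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()  # higher = more preferred
print(reward)
```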
## Training procedure
This model was trained with Reward.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
kevheartsbees/t1nyt4gs32 | kevheartsbees | "2025-03-09T14:23:18Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-09T14:22:28Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: t1nyt4gs
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# t1nyt4gs
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `t1nyt4gs` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
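For a quick test outside those UIs, the LoRA can also be loaded with 🤗 Diffusers. This is a sketch only — it assumes you have accepted the FLUX.1-dev license, have a CUDA GPU with enough memory, and that `load_lora_weights` picks up the adapter file in this repository automatically:
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("kevheartsbees/t1nyt4gs32")
image = pipe(
    "t1nyt4gs sticker sheet on a white background",  # include the trigger word
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("t1nyt4gs.png")
```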
|
matatonic/OlympicCoder-32B-4.25bpw-exl2 | matatonic | "2025-03-12T20:10:52Z" | 0 | 1 | null | [
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:open-r1/codeforces-cots",
"base_model:Qwen/Qwen2.5-Coder-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-32B-Instruct",
"license:apache-2.0",
"exl2",
"region:us"
] | text-generation | "2025-03-12T20:09:53Z" | ---
license: apache-2.0
datasets:
- open-r1/codeforces-cots
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
pipeline_tag: text-generation
---
# Model Card for OlympicCoder-32B
OlympicCoder-32B is a code model that achieves very strong performance on competitive coding benchmarks such as LiveCodeBench and the 2024 International Olympiad in Informatics.
## Model description
- **Model type:** A 32B parameter model fine-tuned on a decontaminated version of the codeforces dataset.
- **Language(s) (NLP):** Primarily English
- **License:** apache-2.0
- **Finetuned from model:** [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
## Evaluation

## Usage
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="open-r1/OlympicCoder-32B", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Write a python program to calculate the 10th Fibonacci number"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=8000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
#<|im_start|>user
#Write a python program to calculate the 10th fibonacci number<|im_end|>
#<|im_start|>assistant
#<think>Okay, I need to write a Python program that calculates the 10th Fibonacci number. Hmm, the Fibonacci sequence starts with 0 and 1. Each subsequent number is the sum of the two preceding ones. So the sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. ...
```
## Training procedure
### Training hyper-parameters
The following hyperparameters were used during training on 16 H100 nodes:
- dataset: open-r1/codeforces-cots_decontaminated
- learning_rate: 4.0e-5
- train_batch_size: 1
- seed: 42
- packing: false
- distributed_type: fsdp
- num_devices: 128
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_min_lr
- min_lr_rate: 0.1
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10.0
|
Word2vec/polyglot_words_embeddings_uig | Word2vec | "2023-05-28T18:53:51Z" | 0 | 0 | null | [
"word2vec",
"ug",
"license:gpl-3.0",
"region:us"
] | null | "2023-05-19T22:08:33Z" | ---
tags:
- word2vec
language: ug
license: gpl-3.0
---
## Description
Word embedding model trained by Al-Rfou et al.
## How to use?
```
import pickle
from numpy import dot
from numpy.linalg import norm
from huggingface_hub import hf_hub_download
words, embeddings = pickle.load(open(hf_hub_download(repo_id="Word2vec/polyglot_words_embeddings_en", filename="words_embeddings_en.pkl"), 'rb'),encoding="latin1")
word = "Irish"
a = embeddings[words.index(word)]
most_similar = []
for i in range(len(embeddings)):
if i != words.index(word):
b = embeddings[i]
cos_sim = dot(a, b)/(norm(a)*norm(b))
most_similar.append(cos_sim)
else:
most_similar.append(0)
words[most_similar.index(max(most_similar))]
```
## Citation
```
@InProceedings{polyglot:2013:ACL-CoNLL,
author = {Al-Rfou, Rami and Perozzi, Bryan and Skiena, Steven},
title = {Polyglot: Distributed Word Representations for Multilingual NLP},
booktitle = {Proceedings of the Seventeenth Conference on Computational Natural Language Learning},
month = {August},
year = {2013},
address = {Sofia, Bulgaria},
publisher = {Association for Computational Linguistics},
pages = {183--192},
url = {http://www.aclweb.org/anthology/W13-3520}
}
``` |
albertus-sussex/veriscrape-fixed-simcse-auto-reference_2_to_verify_8-fold-2 | albertus-sussex | "2025-04-01T12:26:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-04-01T12:26:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
genki10/Trial3BERT_AugV8_k3_task1_organization_sp040_lw010_fold2 | genki10 | "2025-04-06T05:41:23Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-06T05:29:48Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k3_task1_organization_sp040_lw010_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k3_task1_organization_sp040_lw010_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9243
- Qwk: 0.3342
- Mse: 0.9238
- Rmse: 0.9612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 8.9820 | 0.0 | 8.9825 | 2.9971 |
| No log | 2.0 | 6 | 6.1295 | 0.0139 | 6.1299 | 2.4759 |
| No log | 3.0 | 9 | 4.3306 | 0.0078 | 4.3312 | 2.0811 |
| No log | 4.0 | 12 | 2.6780 | 0.0192 | 2.6785 | 1.6366 |
| No log | 5.0 | 15 | 1.9072 | 0.1126 | 1.9077 | 1.3812 |
| No log | 6.0 | 18 | 1.1622 | 0.0213 | 1.1627 | 1.0783 |
| No log | 7.0 | 21 | 1.0950 | 0.0107 | 1.0957 | 1.0467 |
| No log | 8.0 | 24 | 0.8087 | 0.3959 | 0.8090 | 0.8995 |
| No log | 9.0 | 27 | 1.3555 | 0.0513 | 1.3558 | 1.1644 |
| No log | 10.0 | 30 | 0.7125 | 0.4578 | 0.7128 | 0.8443 |
| No log | 11.0 | 33 | 0.7032 | 0.4502 | 0.7035 | 0.8388 |
| No log | 12.0 | 36 | 0.9384 | 0.2315 | 0.9386 | 0.9688 |
| No log | 13.0 | 39 | 0.7547 | 0.4116 | 0.7548 | 0.8688 |
| No log | 14.0 | 42 | 0.8648 | 0.3414 | 0.8649 | 0.9300 |
| No log | 15.0 | 45 | 0.5703 | 0.4968 | 0.5703 | 0.7552 |
| No log | 16.0 | 48 | 0.8725 | 0.3495 | 0.8724 | 0.9340 |
| No log | 17.0 | 51 | 0.5413 | 0.4780 | 0.5412 | 0.7357 |
| No log | 18.0 | 54 | 0.5380 | 0.4791 | 0.5377 | 0.7333 |
| No log | 19.0 | 57 | 0.7737 | 0.4264 | 0.7734 | 0.8794 |
| No log | 20.0 | 60 | 0.6472 | 0.4613 | 0.6468 | 0.8042 |
| No log | 21.0 | 63 | 0.7618 | 0.4200 | 0.7616 | 0.8727 |
| No log | 22.0 | 66 | 0.6805 | 0.4473 | 0.6801 | 0.8247 |
| No log | 23.0 | 69 | 0.5671 | 0.5606 | 0.5664 | 0.7526 |
| No log | 24.0 | 72 | 1.1716 | 0.3134 | 1.1710 | 1.0821 |
| No log | 25.0 | 75 | 0.6136 | 0.5231 | 0.6130 | 0.7829 |
| No log | 26.0 | 78 | 0.6548 | 0.4734 | 0.6544 | 0.8089 |
| No log | 27.0 | 81 | 0.8839 | 0.3674 | 0.8836 | 0.9400 |
| No log | 28.0 | 84 | 0.5971 | 0.4737 | 0.5968 | 0.7725 |
| No log | 29.0 | 87 | 0.9188 | 0.3243 | 0.9183 | 0.9583 |
| No log | 30.0 | 90 | 0.6889 | 0.4618 | 0.6882 | 0.8296 |
| No log | 31.0 | 93 | 1.2799 | 0.2567 | 1.2791 | 1.1310 |
| No log | 32.0 | 96 | 0.9348 | 0.3378 | 0.9342 | 0.9665 |
| No log | 33.0 | 99 | 0.5914 | 0.4623 | 0.5910 | 0.7687 |
| No log | 34.0 | 102 | 0.8774 | 0.3758 | 0.8770 | 0.9365 |
| No log | 35.0 | 105 | 0.7748 | 0.4185 | 0.7744 | 0.8800 |
| No log | 36.0 | 108 | 0.5990 | 0.4304 | 0.5986 | 0.7737 |
| No log | 37.0 | 111 | 1.1577 | 0.2497 | 1.1572 | 1.0757 |
| No log | 38.0 | 114 | 0.9243 | 0.3342 | 0.9238 | 0.9612 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
genki10/Version12AGAINNNASAP_FineTuningBERT_AugV12_k15_task1_organization_k15_k15_fold0 | genki10 | "2025-03-09T03:14:54Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-09T03:01:10Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Version12AGAINNNASAP_FineTuningBERT_AugV12_k15_task1_organization_k15_k15_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Version12AGAINNNASAP_FineTuningBERT_AugV12_k15_task1_organization_k15_k15_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4987
- Qwk: 0.6454
- Mse: 0.4987
- Rmse: 0.7062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 3 | 8.1956 | 0.0 | 8.1956 | 2.8628 |
| No log | 2.0 | 6 | 6.9316 | 0.0 | 6.9316 | 2.6328 |
| No log | 3.0 | 9 | 5.7383 | 0.0129 | 5.7383 | 2.3955 |
| No log | 4.0 | 12 | 4.5410 | 0.0039 | 4.5410 | 2.1310 |
| No log | 5.0 | 15 | 3.4243 | 0.0 | 3.4243 | 1.8505 |
| No log | 6.0 | 18 | 2.4243 | 0.0764 | 2.4243 | 1.5570 |
| No log | 7.0 | 21 | 1.6122 | 0.0316 | 1.6122 | 1.2697 |
| No log | 8.0 | 24 | 1.1316 | 0.0106 | 1.1316 | 1.0637 |
| No log | 9.0 | 27 | 0.8494 | 0.3258 | 0.8494 | 0.9216 |
| No log | 10.0 | 30 | 0.7781 | 0.0968 | 0.7781 | 0.8821 |
| No log | 11.0 | 33 | 0.8229 | 0.0521 | 0.8229 | 0.9072 |
| No log | 12.0 | 36 | 0.9938 | 0.0521 | 0.9938 | 0.9969 |
| No log | 13.0 | 39 | 1.1787 | 0.2423 | 1.1787 | 1.0857 |
| No log | 14.0 | 42 | 1.3687 | 0.1768 | 1.3687 | 1.1699 |
| No log | 15.0 | 45 | 1.0668 | 0.3510 | 1.0668 | 1.0328 |
| No log | 16.0 | 48 | 0.6477 | 0.4793 | 0.6477 | 0.8048 |
| No log | 17.0 | 51 | 0.5008 | 0.6081 | 0.5008 | 0.7077 |
| No log | 18.0 | 54 | 0.5441 | 0.6545 | 0.5441 | 0.7377 |
| No log | 19.0 | 57 | 0.5146 | 0.6376 | 0.5146 | 0.7173 |
| No log | 20.0 | 60 | 0.4834 | 0.6531 | 0.4834 | 0.6953 |
| No log | 21.0 | 63 | 0.5371 | 0.6471 | 0.5371 | 0.7329 |
| No log | 22.0 | 66 | 0.5001 | 0.6699 | 0.5001 | 0.7072 |
| No log | 23.0 | 69 | 1.2744 | 0.4948 | 1.2744 | 1.1289 |
| No log | 24.0 | 72 | 0.5730 | 0.6343 | 0.5730 | 0.7570 |
| No log | 25.0 | 75 | 0.9316 | 0.5431 | 0.9316 | 0.9652 |
| No log | 26.0 | 78 | 0.5195 | 0.6302 | 0.5195 | 0.7207 |
| No log | 27.0 | 81 | 0.6441 | 0.6208 | 0.6441 | 0.8026 |
| No log | 28.0 | 84 | 0.4729 | 0.6384 | 0.4729 | 0.6876 |
| No log | 29.0 | 87 | 0.5504 | 0.6364 | 0.5504 | 0.7419 |
| No log | 30.0 | 90 | 0.5440 | 0.6326 | 0.5440 | 0.7375 |
| No log | 31.0 | 93 | 0.4887 | 0.6219 | 0.4887 | 0.6991 |
| No log | 32.0 | 96 | 0.5220 | 0.6438 | 0.5220 | 0.7225 |
| No log | 33.0 | 99 | 0.4803 | 0.6451 | 0.4803 | 0.6930 |
| No log | 34.0 | 102 | 0.6905 | 0.5969 | 0.6905 | 0.8309 |
| No log | 35.0 | 105 | 0.5435 | 0.6502 | 0.5435 | 0.7372 |
| No log | 36.0 | 108 | 0.5062 | 0.6603 | 0.5062 | 0.7115 |
| No log | 37.0 | 111 | 0.5863 | 0.6488 | 0.5863 | 0.7657 |
| No log | 38.0 | 114 | 0.4874 | 0.6529 | 0.4874 | 0.6981 |
| No log | 39.0 | 117 | 0.5749 | 0.6443 | 0.5749 | 0.7582 |
| No log | 40.0 | 120 | 0.4754 | 0.6483 | 0.4754 | 0.6895 |
| No log | 41.0 | 123 | 0.7331 | 0.5950 | 0.7331 | 0.8562 |
| No log | 42.0 | 126 | 0.4987 | 0.6454 | 0.4987 | 0.7062 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
yusungchuo/llama_3.2_receipt | yusungchuo | "2025-03-29T17:45:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mllama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-29T17:44:49Z" | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yusungchuo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MiniLLM/SFT-OPT-1.3B | MiniLLM | "2024-09-26T14:36:54Z" | 5 | 0 | null | [
"pytorch",
"opt",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"arxiv:2306.08543",
"base_model:facebook/opt-1.3b",
"base_model:finetune:facebook/opt-1.3b",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-09-26T08:26:13Z" | ---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
base_model:
- facebook/opt-1.3b
pipeline_tag: text-generation
---
# SFT-OPT-1.3B
[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)
**SFT-OPT-1.3B** is an OPT-1.3B model supervised fine-tuned on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k).
It is used as a baseline for [MiniLLM](https://huggingface.co/MiniLLM/MiniLLM-OPT-1.3B).
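A minimal generation sketch with 🤗 Transformers; the instruction-style prompt below mirrors the Dolly data the model was fine-tuned on, but the exact template is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/SFT-OPT-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Instruction: Explain knowledge distillation in one sentence.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```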
## Other Baselines
+ [KD](https://huggingface.co/MiniLLM/KD-OPT-1.3B)
+ [SeqKD](https://huggingface.co/MiniLLM/SeqKD-OPT-1.3B)
## Citation
```
@inproceedings{minillm,
title={MiniLLM: Knowledge Distillation of Large Language Models},
author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
booktitle={Proceedings of ICLR},
year={2024}
}
``` |
united-link/f5-tts-ami-finetune-with-ithuan-trv | united-link | "2025-04-07T06:30:37Z" | 0 | 0 | null | [
"text-to-speech",
"ami",
"trv",
"arxiv:2410.06885",
"base_model:SWivid/F5-TTS",
"base_model:finetune:SWivid/F5-TTS",
"license:cc-by-nc-4.0",
"region:us"
] | text-to-speech | "2025-02-17T02:43:59Z" |  |
gasmichel/UAR_Play | gasmichel | "2024-12-28T17:03:03Z" | 5 | 0 | null | [
"safetensors",
"LUAR",
"custom_code",
"en",
"license:apache-2.0",
"region:us"
] | null | "2024-10-03T16:07:27Z" | ---
language:
- en
license: apache-2.0
---
# UAR Play
Literary Character Representations using [UAR Play](https://aclanthology.org/2024.findings-emnlp.744/), trained on fictional character utterances.
You can find the training and evaluation repository [here](https://github.com/deezer/character_embeddings_qa).
This model is based on the [LUAR implementation](https://aclanthology.org/2021.emnlp-main.70/). It uses `all-distillroberta-v1` as the base sentence encoder and was trained on the Play split of [DramaCV](https://huggingface.co/datasets/gasmichel/DramaCV), a dataset consisting of drama plays collected from Project Gutenberg.
You can find the model trained on the Scene split at this [url](https://huggingface.co/gasmichel/UAR_scene).
## Usage
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gasmichel/UAR_Play")
model = AutoModel.from_pretrained("gasmichel/UAR_Play")
# `episodes` are embedded as collections of documents presumed to come from an author
# NOTE: make sure that `episode_length` is consistent across every `episode`
batch_size = 3
episode_length = 16
text = [
["Foo"] * episode_length,
["Bar"] * episode_length,
["Zoo"] * episode_length,
]
text = [j for i in text for j in i]
tokenized_text = tokenizer(
text,
max_length=32,
padding="max_length",
truncation=True,
return_tensors="pt"
)
# inputs size: (batch_size, episode_length, max_token_length)
tokenized_text["input_ids"] = tokenized_text["input_ids"].reshape(batch_size, episode_length, -1)
tokenized_text["attention_mask"] = tokenized_text["attention_mask"].reshape(batch_size, episode_length, -1)
print(tokenized_text["input_ids"].size()) # torch.Size([3, 16, 32])
print(tokenized_text["attention_mask"].size()) # torch.Size([3, 16, 32])
out = model(**tokenized_text)
print(out.size()) # torch.Size([3, 512])
# to get the Transformer attentions:
out, attentions = model(**tokenized_text, output_attentions=True)
print(attentions[0].size()) # torch.Size([48, 12, 32, 32])
```
## Citing & Authors
If you find this model helpful, feel free to cite our [publication](https://aclanthology.org/2024.findings-emnlp.744/).
```
@inproceedings{michel-etal-2024-improving,
title = "Improving Quotation Attribution with Fictional Character Embeddings",
author = "Michel, Gaspard and
Epure, Elena V. and
Hennequin, Romain and
Cerisara, Christophe",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.744",
doi = "10.18653/v1/2024.findings-emnlp.744",
    pages = "12723--12735",
}
```
## License
UAR Play is distributed under the terms of the Apache License (Version 2.0).
All new contributions must be made under the Apache-2.0 license. |
mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF | mradermacher | "2025-03-26T01:13:14Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jan-hq/AlphaMaze-v0.2-1.5B-SFT",
"base_model:quantized:jan-hq/AlphaMaze-v0.2-1.5B-SFT",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-26T00:58:21Z" | ---
base_model: jan-hq/AlphaMaze-v0.2-1.5B-SFT
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jan-hq/AlphaMaze-v0.2-1.5B-SFT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
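As a quick illustration (not part of the original card), a single-file quant can be run locally with `llama-cpp-python`; the quant file name comes from the table below and the generation settings are arbitrary:
```python
# Hedged sketch: run one of the static quants with llama-cpp-python.
# Download the .gguf file from this repo first; settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="AlphaMaze-v0.2-1.5B-SFT.Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=4096,  # context window; adjust to your hardware
)
out = llm("Describe a path through a 5x5 maze from the top-left to the bottom-right corner.",
          max_tokens=128)
print(out["choices"][0]["text"])
```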
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaMaze-v0.2-1.5B-SFT-GGUF/resolve/main/AlphaMaze-v0.2-1.5B-SFT.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bingliangzhang00/STeP-blackhole | bingliangzhang00 | "2025-04-10T01:17:29Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-10T01:13:05Z" | |
mradermacher/MeliodasPercival_01_PasticheInex12-GGUF | mradermacher | "2024-12-28T10:52:25Z" | 12 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/MeliodasPercival_01_PasticheInex12",
"base_model:quantized:MaziyarPanahi/MeliodasPercival_01_PasticheInex12",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-28T10:43:18Z" | ---
base_model: MaziyarPanahi/MeliodasPercival_01_PasticheInex12
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: MeliodasPercival_01_PasticheInex12
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/MeliodasPercival_01_PasticheInex12
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
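For example (not part of the original card), a single quant file can be fetched programmatically with `huggingface_hub`; the file name below is the Q4_K_M quant listed in the table further down:
```python
# Hedged sketch: download one quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/MeliodasPercival_01_PasticheInex12-GGUF",
    filename="MeliodasPercival_01_PasticheInex12.Q4_K_M.gguf",
)
print(gguf_path)  # local path to pass to your GGUF runtime of choice
```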
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MeliodasPercival_01_PasticheInex12-GGUF/resolve/main/MeliodasPercival_01_PasticheInex12.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MaziyarPanahi/mathstral-7B-v0.1-GGUF | MaziyarPanahi | "2024-07-16T16:54:49Z" | 946,598 | 7 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:mistralai/Mathstral-7B-v0.1",
"base_model:quantized:mistralai/Mathstral-7B-v0.1",
"region:us"
] | text-generation | "2024-07-16T15:06:23Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: mathstral-7B-v0.1-GGUF
base_model: mistralai/mathstral-7B-v0.1
inference: false
model_creator: mistralai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/mathstral-7B-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mathstral-7B-v0.1-GGUF)
- Model creator: [mistralai](https://huggingface.co/mistralai)
- Original model: [mistralai/mathstral-7B-v0.1](https://huggingface.co/mistralai/mathstral-7B-v0.1)
## Description
[MaziyarPanahi/mathstral-7B-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/mathstral-7B-v0.1-GGUF) contains GGUF format model files for [mistralai/mathstral-7B-v0.1](https://huggingface.co/mistralai/mathstral-7B-v0.1).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
---
**Original README**
# Model Card for Mathstral-7B-v0.1
Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B.
You can read more in the [official blog post](https://mistral.ai/news/mathstral/).
## Installation
It is recommended to use `mistralai/mathstral-7B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference)
```
pip install mistral_inference>=1.2.0
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'mathstral-7B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/mathstral-7B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.
```
mistral-chat $HOME/mistral_models/mathstral-7B-v0.1 --instruct --max_tokens 256
```
You can then start chatting with the model, *e.g.* prompt it with something like:
*"Albert likes to surf every week. Each surfing session lasts for 4 hours and costs $20 per hour. How much would Albert spend in 5 weeks?"*
## Evaluation
We evaluate Mathstral 7B and open-weight models of the similar size on industry-standard benchmarks.
| Benchmarks | MATH | GSM8K (8-shot) | Odyssey Math maj@16 | GRE Math maj@16 | AMC 2023 maj@16 | AIME 2024 maj@16 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Mathstral 7B | **56.6** | 77.1 | **37.2** | 56.9 | **42.4** | **2/30** |
| DeepSeek Math 7B | 44.4 | **80.6** | 27.6 | 44.6 | 28.0 | 0/30 |
| Llama3 8B | 28.4 | 75.4 | 24.0 | 26.2 | 34.4 | 0/30 |
| GLM4 9B | 50.2 | 48.8 | 18.9 | 46.2 | 36.0 | 1/30 |
| QWen2 7B | **56.8** | 32.7 | 24.8 | **58.5** | 35.2 | **2/30** |
| Gemma2 9B | 48.3 | 69.5 | 18.6 | 52.3 | 31.2 | 1/30 |
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
robiual-awal/eb95c252-523a-417b-887f-8559933bbaaf | robiual-awal | "2025-02-08T21:39:45Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | "2025-02-08T21:37:07Z" | ---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: eb95c252-523a-417b-887f-8559933bbaaf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# eb95c252-523a-417b-887f-8559933bbaaf
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrunaAI/codellama-CodeLlama-13b-Instruct-hf-HQQ-4bit-smashed | PrunaAI | "2024-08-02T16:04:00Z" | 7 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-13b-Instruct-hf",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-17T22:48:15Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: codellama/CodeLlama-13b-Instruct-hf
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo codellama/CodeLlama-13b-Instruct-hf are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/codellama-CodeLlama-13b-Instruct-hf-HQQ-4bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/codellama-CodeLlama-13b-Instruct-hf-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-13b-Instruct-hf")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model codellama/CodeLlama-13b-Instruct-hf before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
zaow/q-Taxi-v3-actionmask | zaow | "2024-06-10T06:03:28Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-06-10T06:03:26Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-actionmask
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="zaow/q-Taxi-v3-actionmask", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SCANSKY/BERTopic-Tourism-Hindi | SCANSKY | "2025-02-28T16:35:11Z" | 4 | 0 | bertopic | [
"bertopic",
"tourism",
"topicmodelling",
"hindi",
"text-classification",
"hi",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-06T13:41:02Z" | ---
language:
- hi
pipeline_tag: text-classification
library_name: bertopic
tags:
- tourism
- topicmodelling
- hindi
--- |
jhamel/model-1 | jhamel | "2024-03-31T20:14:19Z" | 1 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-31T20:08:12Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** jhamel
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rvergara2017/dpo_llama-3.2-1B-tldr | rvergara2017 | "2024-12-13T18:19:47Z" | 150 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-13T18:10:54Z" | ---
library_name: transformers
model_name: dpo_llama-3.2-1B-tldr
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for dpo_llama-3.2-1B-tldr
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rvergara2017/dpo_llama-3.2-1B-tldr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rvergara2017-universidad-de-concepcion/policy-model/runs/mtu0lmch)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
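The card does not include the training script. As a rough, hedged sketch, a DPO run with TRL generally looks like the following; the base model, preference dataset, and hyperparameters here are placeholders, not the actual configuration used for this checkpoint:
```python
# Illustrative DPO setup with TRL -- NOT the author's actual training script.
# Base model, dataset name, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "meta-llama/Llama-3.2-1B"  # assumed base; the card does not state it
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Placeholder dataset: any dataset with "prompt", "chosen", "rejected" columns works.
train_dataset = load_dataset("your-org/tldr-preference-pairs", split="train")

args = DPOConfig(output_dir="dpo_llama-3.2-1B-tldr", beta=0.1,
                 per_device_train_batch_size=2, learning_rate=5e-7)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer)  # processing_class requires a recent TRL (>= 0.12)
trainer.train()
```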
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.4.1
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
datlaaaaaaa/8f476975-663a-4034-aa58-64f8c887bcf3 | datlaaaaaaa | "2025-02-01T14:49:27Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-01T13:50:14Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8f476975-663a-4034-aa58-64f8c887bcf3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 70dd2715c06e09e4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/70dd2715c06e09e4_train_data.json
type:
field_input: ''
field_instruction: problem
field_output: qwq
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/8f476975-663a-4034-aa58-64f8c887bcf3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/70dd2715c06e09e4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4d3c3854-d836-45a3-94c1-95acfa702e49
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4d3c3854-d836-45a3-94c1-95acfa702e49
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8f476975-663a-4034-aa58-64f8c887bcf3
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4093 | 0.0127 | 200 | 0.4534 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SarthakM320/q-FrozenLake-v1-8x8-noSlippery | SarthakM320 | "2023-12-16T09:32:33Z" | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-16T09:15:24Z" | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="SarthakM320/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hqbui/Reinforce-PixelCopter-PLE-v0 | hqbui | "2023-12-12T21:02:22Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-12T19:11:38Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.70 +/- 17.46
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
imvladikon/sentence-transformers-alephbert | imvladikon | "2023-04-06T15:21:09Z" | 45,165 | 7 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"he",
"arxiv:2104.04052",
"arxiv:1908.10084",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-04-04T07:57:25Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- he
library_name: sentence-transformers
---
# imvladikon/sentence-transformers-alephbert[WIP]
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The current version is a distillation of the [LaBSE](https://huggingface.co/sentence-transformers/LaBSE) model on a private corpus.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = [
"הם היו שמחים לראות את האירוע שהתקיים.",
"לראות את האירוע שהתקיים היה מאוד משמח להם."
]
model = SentenceTransformer('imvladikon/sentence-transformers-alephbert')
embeddings = model.encode(sentences)
print(cos_sim(*tuple(embeddings)).item())
# 0.883316159248352
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = [
"הם היו שמחים לראות את האירוע שהתקיים.",
"לראות את האירוע שהתקיים היה מאוד משמח להם."
]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('imvladikon/sentence-transformers-alephbert')
model = AutoModel.from_pretrained('imvladikon/sentence-transformers-alephbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
cos_sim = nn.CosineSimilarity(dim=0, eps=1e-6)
print(cos_sim(sentence_embeddings[0], sentence_embeddings[1]).item())
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 44999 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 44999,
"weight_decay": 0.01
}
```
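Putting the listed parameters together, a rough reconstruction of the training call could look like this (the starting checkpoint and the training pairs are assumptions; the real corpus is private):
```python
# Hedged reconstruction of the training setup from the parameters listed above.
# The starting checkpoint and the InputExample pairs are placeholders.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("onlplab/alephbert-base")  # assumed starting point; not stated in the card
train_examples = [
    InputExample(texts=["הם היו שמחים לראות את האירוע שהתקיים.",
                        "לראות את האירוע שהתקיים היה מאוד משמח להם."]),  # placeholder pair
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=44999,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```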
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@misc{seker2021alephberta,
title={AlephBERT:A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With},
author={Amit Seker and Elron Bandel and Dan Bareket and Idan Brusilovsky and Refael Shaked Greenfeld and Reut Tsarfaty},
year={2021},
eprint={2104.04052},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{reimers2019sentencebert,
title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
author={Nils Reimers and Iryna Gurevych},
year={2019},
eprint={1908.10084},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
MayBashendy/ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k17_task1_organization | MayBashendy | "2025-01-16T01:03:09Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-16T00:52:48Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k17_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_OSS_usingWellWrittenEssays_FineTuningAraBERT_run3_AugV5_k17_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8322
- Qwk: 0.6412
- Mse: 0.8322
- Rmse: 0.9123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0157 | 2 | 6.7883 | 0.0358 | 6.7883 | 2.6054 |
| No log | 0.0315 | 4 | 4.8413 | 0.0682 | 4.8413 | 2.2003 |
| No log | 0.0472 | 6 | 3.0603 | 0.0848 | 3.0603 | 1.7494 |
| No log | 0.0630 | 8 | 2.7090 | 0.1060 | 2.7090 | 1.6459 |
| No log | 0.0787 | 10 | 2.1067 | 0.1515 | 2.1067 | 1.4515 |
| No log | 0.0945 | 12 | 1.5891 | 0.1154 | 1.5891 | 1.2606 |
| No log | 0.1102 | 14 | 1.6815 | 0.1538 | 1.6815 | 1.2967 |
| No log | 0.1260 | 16 | 1.7015 | 0.2018 | 1.7015 | 1.3044 |
| No log | 0.1417 | 18 | 1.7046 | 0.1495 | 1.7046 | 1.3056 |
| No log | 0.1575 | 20 | 1.7080 | 0.2000 | 1.7080 | 1.3069 |
| No log | 0.1732 | 22 | 1.5865 | 0.3509 | 1.5865 | 1.2596 |
| No log | 0.1890 | 24 | 1.3117 | 0.4762 | 1.3117 | 1.1453 |
| No log | 0.2047 | 26 | 1.3973 | 0.4203 | 1.3973 | 1.1821 |
| No log | 0.2205 | 28 | 2.1031 | 0.3509 | 2.1031 | 1.4502 |
| No log | 0.2362 | 30 | 1.7712 | 0.4255 | 1.7712 | 1.3309 |
| No log | 0.2520 | 32 | 1.3228 | 0.3898 | 1.3228 | 1.1501 |
| No log | 0.2677 | 34 | 1.4996 | 0.3306 | 1.4996 | 1.2246 |
| No log | 0.2835 | 36 | 1.6272 | 0.1880 | 1.6272 | 1.2756 |
| No log | 0.2992 | 38 | 1.6205 | 0.1754 | 1.6205 | 1.2730 |
| No log | 0.3150 | 40 | 1.4026 | 0.3717 | 1.4026 | 1.1843 |
| No log | 0.3307 | 42 | 1.2039 | 0.5203 | 1.2039 | 1.0972 |
| No log | 0.3465 | 44 | 1.0494 | 0.5854 | 1.0494 | 1.0244 |
| No log | 0.3622 | 46 | 0.9865 | 0.5210 | 0.9865 | 0.9932 |
| No log | 0.3780 | 48 | 0.9417 | 0.5410 | 0.9417 | 0.9704 |
| No log | 0.3937 | 50 | 0.9620 | 0.576 | 0.9620 | 0.9808 |
| No log | 0.4094 | 52 | 1.0730 | 0.5714 | 1.0730 | 1.0359 |
| No log | 0.4252 | 54 | 1.1320 | 0.5581 | 1.1320 | 1.0639 |
| No log | 0.4409 | 56 | 1.1305 | 0.6 | 1.1305 | 1.0633 |
| No log | 0.4567 | 58 | 1.0333 | 0.6712 | 1.0333 | 1.0165 |
| No log | 0.4724 | 60 | 0.8163 | 0.6897 | 0.8163 | 0.9035 |
| No log | 0.4882 | 62 | 0.7448 | 0.7162 | 0.7448 | 0.8630 |
| No log | 0.5039 | 64 | 0.7328 | 0.7083 | 0.7328 | 0.8561 |
| No log | 0.5197 | 66 | 0.7433 | 0.6849 | 0.7433 | 0.8622 |
| No log | 0.5354 | 68 | 0.7711 | 0.6986 | 0.7711 | 0.8781 |
| No log | 0.5512 | 70 | 0.8266 | 0.6812 | 0.8266 | 0.9092 |
| No log | 0.5669 | 72 | 1.0364 | 0.6015 | 1.0364 | 1.0180 |
| No log | 0.5827 | 74 | 1.3931 | 0.4720 | 1.3931 | 1.1803 |
| No log | 0.5984 | 76 | 1.4228 | 0.5029 | 1.4228 | 1.1928 |
| No log | 0.6142 | 78 | 1.0851 | 0.6832 | 1.0851 | 1.0417 |
| No log | 0.6299 | 80 | 0.7439 | 0.7027 | 0.7439 | 0.8625 |
| No log | 0.6457 | 82 | 0.7365 | 0.6980 | 0.7365 | 0.8582 |
| No log | 0.6614 | 84 | 0.8055 | 0.6883 | 0.8055 | 0.8975 |
| No log | 0.6772 | 86 | 0.7863 | 0.7467 | 0.7863 | 0.8868 |
| No log | 0.6929 | 88 | 0.6867 | 0.7397 | 0.6867 | 0.8287 |
| No log | 0.7087 | 90 | 0.7795 | 0.6806 | 0.7795 | 0.8829 |
| No log | 0.7244 | 92 | 0.9020 | 0.7105 | 0.9020 | 0.9497 |
| No log | 0.7402 | 94 | 1.0885 | 0.6333 | 1.0885 | 1.0433 |
| No log | 0.7559 | 96 | 1.2807 | 0.6264 | 1.2807 | 1.1317 |
| No log | 0.7717 | 98 | 1.1752 | 0.5890 | 1.1752 | 1.0840 |
| No log | 0.7874 | 100 | 0.9266 | 0.6294 | 0.9266 | 0.9626 |
| No log | 0.8031 | 102 | 0.8045 | 0.6620 | 0.8045 | 0.8969 |
| No log | 0.8189 | 104 | 0.7592 | 0.7114 | 0.7592 | 0.8713 |
| No log | 0.8346 | 106 | 0.7171 | 0.7355 | 0.7171 | 0.8468 |
| No log | 0.8504 | 108 | 0.6534 | 0.7516 | 0.6534 | 0.8083 |
| No log | 0.8661 | 110 | 0.7070 | 0.7515 | 0.7070 | 0.8408 |
| No log | 0.8819 | 112 | 1.1652 | 0.6222 | 1.1652 | 1.0794 |
| No log | 0.8976 | 114 | 1.5356 | 0.5172 | 1.5356 | 1.2392 |
| No log | 0.9134 | 116 | 1.5614 | 0.4342 | 1.5614 | 1.2495 |
| No log | 0.9291 | 118 | 1.4097 | 0.4626 | 1.4097 | 1.1873 |
| No log | 0.9449 | 120 | 1.0525 | 0.6165 | 1.0525 | 1.0259 |
| No log | 0.9606 | 122 | 0.8510 | 0.6857 | 0.8510 | 0.9225 |
| No log | 0.9764 | 124 | 0.8067 | 0.7143 | 0.8067 | 0.8982 |
| No log | 0.9921 | 126 | 0.8334 | 0.7050 | 0.8334 | 0.9129 |
| No log | 1.0079 | 128 | 0.8988 | 0.6087 | 0.8988 | 0.9480 |
| No log | 1.0236 | 130 | 0.9679 | 0.5874 | 0.9679 | 0.9838 |
| No log | 1.0394 | 132 | 0.9865 | 0.6144 | 0.9865 | 0.9932 |
| No log | 1.0551 | 134 | 0.8222 | 0.6573 | 0.8222 | 0.9068 |
| No log | 1.0709 | 136 | 0.6885 | 0.6809 | 0.6885 | 0.8298 |
| No log | 1.0866 | 138 | 0.7159 | 0.6763 | 0.7159 | 0.8461 |
| No log | 1.1024 | 140 | 0.7610 | 0.6957 | 0.7610 | 0.8723 |
| No log | 1.1181 | 142 | 0.8494 | 0.6620 | 0.8494 | 0.9216 |
| No log | 1.1339 | 144 | 1.0654 | 0.6144 | 1.0654 | 1.0322 |
| No log | 1.1496 | 146 | 1.2582 | 0.5542 | 1.2582 | 1.1217 |
| No log | 1.1654 | 148 | 1.2743 | 0.4906 | 1.2743 | 1.1289 |
| No log | 1.1811 | 150 | 1.2891 | 0.5031 | 1.2891 | 1.1354 |
| No log | 1.1969 | 152 | 1.2260 | 0.5270 | 1.2260 | 1.1073 |
| No log | 1.2126 | 154 | 1.1048 | 0.5294 | 1.1048 | 1.0511 |
| No log | 1.2283 | 156 | 1.0799 | 0.5612 | 1.0799 | 1.0392 |
| No log | 1.2441 | 158 | 0.9890 | 0.5714 | 0.9890 | 0.9945 |
| No log | 1.2598 | 160 | 0.8318 | 0.6438 | 0.8318 | 0.9120 |
| No log | 1.2756 | 162 | 0.7045 | 0.7368 | 0.7045 | 0.8393 |
| No log | 1.2913 | 164 | 0.6566 | 0.7448 | 0.6566 | 0.8103 |
| No log | 1.3071 | 166 | 0.6554 | 0.7211 | 0.6554 | 0.8096 |
| No log | 1.3228 | 168 | 0.6968 | 0.7075 | 0.6968 | 0.8347 |
| No log | 1.3386 | 170 | 0.7736 | 0.7020 | 0.7736 | 0.8795 |
| No log | 1.3543 | 172 | 0.9850 | 0.6076 | 0.9850 | 0.9925 |
| No log | 1.3701 | 174 | 1.1904 | 0.6199 | 1.1904 | 1.0910 |
| No log | 1.3858 | 176 | 1.2922 | 0.6082 | 1.2922 | 1.1367 |
| No log | 1.4016 | 178 | 1.1966 | 0.5939 | 1.1966 | 1.0939 |
| No log | 1.4173 | 180 | 1.0237 | 0.6289 | 1.0237 | 1.0118 |
| No log | 1.4331 | 182 | 0.9033 | 0.6294 | 0.9033 | 0.9504 |
| No log | 1.4488 | 184 | 0.8090 | 0.6950 | 0.8090 | 0.8994 |
| No log | 1.4646 | 186 | 0.8016 | 0.7042 | 0.8016 | 0.8953 |
| No log | 1.4803 | 188 | 0.8022 | 0.7 | 0.8022 | 0.8957 |
| No log | 1.4961 | 190 | 0.8267 | 0.6525 | 0.8267 | 0.9092 |
| No log | 1.5118 | 192 | 0.8759 | 0.6216 | 0.8759 | 0.9359 |
| No log | 1.5276 | 194 | 0.8712 | 0.6620 | 0.8712 | 0.9334 |
| No log | 1.5433 | 196 | 0.9673 | 0.5714 | 0.9673 | 0.9835 |
| No log | 1.5591 | 198 | 1.0765 | 0.5414 | 1.0765 | 1.0375 |
| No log | 1.5748 | 200 | 1.0594 | 0.5414 | 1.0594 | 1.0293 |
| No log | 1.5906 | 202 | 0.8837 | 0.6176 | 0.8837 | 0.9400 |
| No log | 1.6063 | 204 | 0.7319 | 0.7234 | 0.7319 | 0.8555 |
| No log | 1.6220 | 206 | 0.7000 | 0.7234 | 0.7000 | 0.8366 |
| No log | 1.6378 | 208 | 0.7127 | 0.7172 | 0.7127 | 0.8442 |
| No log | 1.6535 | 210 | 0.7995 | 0.6667 | 0.7995 | 0.8942 |
| No log | 1.6693 | 212 | 0.8171 | 0.6712 | 0.8171 | 0.9039 |
| No log | 1.6850 | 214 | 0.8432 | 0.6577 | 0.8432 | 0.9182 |
| No log | 1.7008 | 216 | 0.8680 | 0.6483 | 0.8680 | 0.9317 |
| No log | 1.7165 | 218 | 0.8630 | 0.6475 | 0.8630 | 0.9290 |
| No log | 1.7323 | 220 | 0.8618 | 0.6475 | 0.8618 | 0.9283 |
| No log | 1.7480 | 222 | 0.8799 | 0.6370 | 0.8799 | 0.9381 |
| No log | 1.7638 | 224 | 0.8898 | 0.6165 | 0.8898 | 0.9433 |
| No log | 1.7795 | 226 | 0.9304 | 0.5802 | 0.9304 | 0.9646 |
| No log | 1.7953 | 228 | 0.9096 | 0.6316 | 0.9096 | 0.9537 |
| No log | 1.8110 | 230 | 0.8310 | 0.6812 | 0.8310 | 0.9116 |
| No log | 1.8268 | 232 | 0.7485 | 0.7273 | 0.7485 | 0.8651 |
| No log | 1.8425 | 234 | 0.7160 | 0.7682 | 0.7160 | 0.8462 |
| No log | 1.8583 | 236 | 0.7355 | 0.7564 | 0.7355 | 0.8576 |
| No log | 1.8740 | 238 | 0.8414 | 0.6988 | 0.8414 | 0.9173 |
| No log | 1.8898 | 240 | 1.0296 | 0.6889 | 1.0296 | 1.0147 |
| No log | 1.9055 | 242 | 1.1241 | 0.6667 | 1.1241 | 1.0602 |
| No log | 1.9213 | 244 | 1.0324 | 0.6667 | 1.0324 | 1.0160 |
| No log | 1.9370 | 246 | 0.8550 | 0.7020 | 0.8550 | 0.9247 |
| No log | 1.9528 | 248 | 0.7741 | 0.7211 | 0.7741 | 0.8798 |
| No log | 1.9685 | 250 | 0.7635 | 0.7183 | 0.7635 | 0.8738 |
| No log | 1.9843 | 252 | 0.7662 | 0.7324 | 0.7662 | 0.8753 |
| No log | 2.0 | 254 | 0.7572 | 0.7324 | 0.7572 | 0.8702 |
| No log | 2.0157 | 256 | 0.8347 | 0.6525 | 0.8347 | 0.9136 |
| No log | 2.0315 | 258 | 0.9941 | 0.5833 | 0.9941 | 0.9971 |
| No log | 2.0472 | 260 | 1.0738 | 0.5867 | 1.0738 | 1.0362 |
| No log | 2.0630 | 262 | 0.9629 | 0.6241 | 0.9629 | 0.9813 |
| No log | 2.0787 | 264 | 0.8544 | 0.6462 | 0.8544 | 0.9243 |
| No log | 2.0945 | 266 | 0.8538 | 0.6406 | 0.8538 | 0.9240 |
| No log | 2.1102 | 268 | 0.8970 | 0.6562 | 0.8970 | 0.9471 |
| No log | 2.1260 | 270 | 0.8858 | 0.6457 | 0.8858 | 0.9412 |
| No log | 2.1417 | 272 | 0.9280 | 0.6308 | 0.9280 | 0.9633 |
| No log | 2.1575 | 274 | 1.0649 | 0.5775 | 1.0649 | 1.0319 |
| No log | 2.1732 | 276 | 1.2150 | 0.5679 | 1.2150 | 1.1023 |
| No log | 2.1890 | 278 | 1.2350 | 0.5988 | 1.2350 | 1.1113 |
| No log | 2.2047 | 280 | 1.0419 | 0.6460 | 1.0419 | 1.0207 |
| No log | 2.2205 | 282 | 0.8691 | 0.6667 | 0.8691 | 0.9322 |
| No log | 2.2362 | 284 | 0.8199 | 0.6165 | 0.8199 | 0.9055 |
| No log | 2.2520 | 286 | 0.8654 | 0.6269 | 0.8654 | 0.9303 |
| No log | 2.2677 | 288 | 0.9133 | 0.6370 | 0.9133 | 0.9556 |
| No log | 2.2835 | 290 | 0.9509 | 0.5672 | 0.9509 | 0.9752 |
| No log | 2.2992 | 292 | 1.0237 | 0.6 | 1.0237 | 1.0118 |
| No log | 2.3150 | 294 | 1.1310 | 0.5655 | 1.1310 | 1.0635 |
| No log | 2.3307 | 296 | 1.0935 | 0.5860 | 1.0935 | 1.0457 |
| No log | 2.3465 | 298 | 0.9171 | 0.7 | 0.9171 | 0.9577 |
| No log | 2.3622 | 300 | 0.7525 | 0.7333 | 0.7525 | 0.8675 |
| No log | 2.3780 | 302 | 0.6545 | 0.7448 | 0.6545 | 0.8090 |
| No log | 2.3937 | 304 | 0.6977 | 0.7092 | 0.6977 | 0.8353 |
| No log | 2.4094 | 306 | 0.7401 | 0.6944 | 0.7401 | 0.8603 |
| No log | 2.4252 | 308 | 0.6977 | 0.7273 | 0.6977 | 0.8353 |
| No log | 2.4409 | 310 | 0.7241 | 0.6861 | 0.7241 | 0.8510 |
| No log | 2.4567 | 312 | 0.8410 | 0.6370 | 0.8410 | 0.9171 |
| No log | 2.4724 | 314 | 0.9976 | 0.5942 | 0.9976 | 0.9988 |
| No log | 2.4882 | 316 | 1.1640 | 0.5217 | 1.1640 | 1.0789 |
| No log | 2.5039 | 318 | 1.2758 | 0.5180 | 1.2758 | 1.1295 |
| No log | 2.5197 | 320 | 1.2503 | 0.5217 | 1.2503 | 1.1182 |
| No log | 2.5354 | 322 | 1.1017 | 0.5263 | 1.1017 | 1.0496 |
| No log | 2.5512 | 324 | 0.9879 | 0.5758 | 0.9879 | 0.9939 |
| No log | 2.5669 | 326 | 0.9232 | 0.5802 | 0.9232 | 0.9608 |
| No log | 2.5827 | 328 | 0.8684 | 0.6212 | 0.8684 | 0.9319 |
| No log | 2.5984 | 330 | 0.8940 | 0.6331 | 0.8940 | 0.9455 |
| No log | 2.6142 | 332 | 1.0072 | 0.56 | 1.0072 | 1.0036 |
| No log | 2.6299 | 334 | 1.1227 | 0.5655 | 1.1227 | 1.0596 |
| No log | 2.6457 | 336 | 1.1617 | 0.5109 | 1.1617 | 1.0778 |
| No log | 2.6614 | 338 | 1.1084 | 0.5294 | 1.1084 | 1.0528 |
| No log | 2.6772 | 340 | 0.9846 | 0.6029 | 0.9846 | 0.9923 |
| No log | 2.6929 | 342 | 0.8796 | 0.6519 | 0.8796 | 0.9379 |
| No log | 2.7087 | 344 | 0.8284 | 0.6569 | 0.8284 | 0.9102 |
| No log | 2.7244 | 346 | 0.8616 | 0.6519 | 0.8616 | 0.9282 |
| No log | 2.7402 | 348 | 0.8292 | 0.6866 | 0.8292 | 0.9106 |
| No log | 2.7559 | 350 | 0.8039 | 0.6866 | 0.8039 | 0.8966 |
| No log | 2.7717 | 352 | 0.7827 | 0.6866 | 0.7827 | 0.8847 |
| No log | 2.7874 | 354 | 0.7879 | 0.6901 | 0.7879 | 0.8876 |
| No log | 2.8031 | 356 | 0.7928 | 0.7152 | 0.7928 | 0.8904 |
| No log | 2.8189 | 358 | 0.8114 | 0.6957 | 0.8114 | 0.9008 |
| No log | 2.8346 | 360 | 0.8070 | 0.7337 | 0.8070 | 0.8983 |
| No log | 2.8504 | 362 | 0.7488 | 0.6939 | 0.7488 | 0.8654 |
| No log | 2.8661 | 364 | 0.7206 | 0.7222 | 0.7206 | 0.8489 |
| No log | 2.8819 | 366 | 0.7044 | 0.7465 | 0.7044 | 0.8393 |
| No log | 2.8976 | 368 | 0.7462 | 0.7183 | 0.7462 | 0.8638 |
| No log | 2.9134 | 370 | 0.8933 | 0.6286 | 0.8933 | 0.9451 |
| No log | 2.9291 | 372 | 1.0770 | 0.5369 | 1.0770 | 1.0378 |
| No log | 2.9449 | 374 | 1.0882 | 0.5548 | 1.0882 | 1.0432 |
| No log | 2.9606 | 376 | 1.0530 | 0.5556 | 1.0530 | 1.0261 |
| No log | 2.9764 | 378 | 0.9137 | 0.6423 | 0.9137 | 0.9559 |
| No log | 2.9921 | 380 | 0.8360 | 0.6906 | 0.8360 | 0.9143 |
| No log | 3.0079 | 382 | 0.8257 | 0.6957 | 0.8257 | 0.9087 |
| No log | 3.0236 | 384 | 0.7622 | 0.7143 | 0.7622 | 0.8730 |
| No log | 3.0394 | 386 | 0.7316 | 0.7286 | 0.7316 | 0.8554 |
| No log | 3.0551 | 388 | 0.8324 | 0.6923 | 0.8324 | 0.9123 |
| No log | 3.0709 | 390 | 0.9353 | 0.7030 | 0.9353 | 0.9671 |
| No log | 3.0866 | 392 | 0.9516 | 0.6626 | 0.9516 | 0.9755 |
| No log | 3.1024 | 394 | 0.8743 | 0.625 | 0.8743 | 0.9350 |
| No log | 3.1181 | 396 | 0.7965 | 0.6815 | 0.7965 | 0.8924 |
| No log | 3.1339 | 398 | 0.7930 | 0.6963 | 0.7930 | 0.8905 |
| No log | 3.1496 | 400 | 0.8154 | 0.6963 | 0.8154 | 0.9030 |
| No log | 3.1654 | 402 | 0.8817 | 0.6716 | 0.8817 | 0.9390 |
| No log | 3.1811 | 404 | 0.9866 | 0.6043 | 0.9866 | 0.9933 |
| No log | 3.1969 | 406 | 1.0203 | 0.5915 | 1.0203 | 1.0101 |
| No log | 3.2126 | 408 | 0.9699 | 0.6176 | 0.9699 | 0.9849 |
| No log | 3.2283 | 410 | 0.8889 | 0.6567 | 0.8889 | 0.9428 |
| No log | 3.2441 | 412 | 0.8714 | 0.6667 | 0.8714 | 0.9335 |
| No log | 3.2598 | 414 | 0.9392 | 0.6490 | 0.9392 | 0.9691 |
| No log | 3.2756 | 416 | 1.0751 | 0.6125 | 1.0751 | 1.0369 |
| No log | 3.2913 | 418 | 1.0576 | 0.6040 | 1.0576 | 1.0284 |
| No log | 3.3071 | 420 | 0.9453 | 0.6567 | 0.9453 | 0.9723 |
| No log | 3.3228 | 422 | 0.8648 | 0.6667 | 0.8648 | 0.9300 |
| No log | 3.3386 | 424 | 0.8193 | 0.6912 | 0.8193 | 0.9052 |
| No log | 3.3543 | 426 | 0.7927 | 0.6912 | 0.7927 | 0.8904 |
| No log | 3.3701 | 428 | 0.7872 | 0.7172 | 0.7872 | 0.8873 |
| No log | 3.3858 | 430 | 0.7832 | 0.7143 | 0.7832 | 0.8850 |
| No log | 3.4016 | 432 | 0.7644 | 0.7375 | 0.7644 | 0.8743 |
| No log | 3.4173 | 434 | 0.6831 | 0.7403 | 0.6831 | 0.8265 |
| No log | 3.4331 | 436 | 0.6639 | 0.7194 | 0.6639 | 0.8148 |
| No log | 3.4488 | 438 | 0.7031 | 0.7194 | 0.7031 | 0.8385 |
| No log | 3.4646 | 440 | 0.7679 | 0.6866 | 0.7679 | 0.8763 |
| No log | 3.4803 | 442 | 0.8364 | 0.6567 | 0.8364 | 0.9146 |
| No log | 3.4961 | 444 | 0.8766 | 0.6412 | 0.8766 | 0.9363 |
| No log | 3.5118 | 446 | 0.8422 | 0.6617 | 0.8422 | 0.9177 |
| No log | 3.5276 | 448 | 0.8323 | 0.6617 | 0.8323 | 0.9123 |
| No log | 3.5433 | 450 | 0.8242 | 0.6866 | 0.8242 | 0.9079 |
| No log | 3.5591 | 452 | 0.8644 | 0.6715 | 0.8644 | 0.9298 |
| No log | 3.5748 | 454 | 0.9000 | 0.6569 | 0.9000 | 0.9487 |
| No log | 3.5906 | 456 | 0.9527 | 0.5970 | 0.9527 | 0.9761 |
| No log | 3.6063 | 458 | 0.9681 | 0.6165 | 0.9681 | 0.9839 |
| No log | 3.6220 | 460 | 0.9238 | 0.6466 | 0.9238 | 0.9611 |
| No log | 3.6378 | 462 | 0.9674 | 0.6107 | 0.9674 | 0.9836 |
| No log | 3.6535 | 464 | 0.9909 | 0.6107 | 0.9909 | 0.9954 |
| No log | 3.6693 | 466 | 0.9741 | 0.6107 | 0.9741 | 0.9870 |
| No log | 3.6850 | 468 | 0.9336 | 0.6165 | 0.9336 | 0.9662 |
| No log | 3.7008 | 470 | 0.9062 | 0.6309 | 0.9062 | 0.9520 |
| No log | 3.7165 | 472 | 1.0247 | 0.6932 | 1.0247 | 1.0123 |
| No log | 3.7323 | 474 | 1.0486 | 0.6780 | 1.0486 | 1.0240 |
| No log | 3.7480 | 476 | 0.9794 | 0.6506 | 0.9794 | 0.9897 |
| No log | 3.7638 | 478 | 0.8504 | 0.6857 | 0.8504 | 0.9222 |
| No log | 3.7795 | 480 | 0.7795 | 0.6912 | 0.7795 | 0.8829 |
| No log | 3.7953 | 482 | 0.7996 | 0.7111 | 0.7996 | 0.8942 |
| No log | 3.8110 | 484 | 0.8337 | 0.6515 | 0.8337 | 0.9131 |
| No log | 3.8268 | 486 | 0.8644 | 0.6370 | 0.8644 | 0.9297 |
| No log | 3.8425 | 488 | 0.8662 | 0.6571 | 0.8662 | 0.9307 |
| No log | 3.8583 | 490 | 0.8280 | 0.6619 | 0.8280 | 0.9100 |
| No log | 3.8740 | 492 | 0.7695 | 0.6963 | 0.7695 | 0.8772 |
| No log | 3.8898 | 494 | 0.7183 | 0.7111 | 0.7183 | 0.8475 |
| No log | 3.9055 | 496 | 0.6885 | 0.7429 | 0.6885 | 0.8298 |
| No log | 3.9213 | 498 | 0.7038 | 0.7518 | 0.7038 | 0.8389 |
| 0.4568 | 3.9370 | 500 | 0.8139 | 0.725 | 0.8139 | 0.9022 |
| 0.4568 | 3.9528 | 502 | 0.8635 | 0.6795 | 0.8635 | 0.9293 |
| 0.4568 | 3.9685 | 504 | 0.8259 | 0.6715 | 0.8259 | 0.9088 |
| 0.4568 | 3.9843 | 506 | 0.7923 | 0.6462 | 0.7923 | 0.8901 |
| 0.4568 | 4.0 | 508 | 0.8023 | 0.6615 | 0.8023 | 0.8957 |
| 0.4568 | 4.0157 | 510 | 0.8322 | 0.6412 | 0.8322 | 0.9123 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
sb3/tqc-FetchSlide-v1 | sb3 | "2022-10-11T15:19:49Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"FetchSlide-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-06-02T20:56:18Z" | ---
library_name: stable-baselines3
tags:
- FetchSlide-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: -29.00 +/- 9.66
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchSlide-v1
type: FetchSlide-v1
---
# **TQC** Agent playing **FetchSlide-v1**
This is a trained model of a **TQC** agent playing **FetchSlide-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchSlide-v1 -orga sb3 -f logs/
python enjoy.py --algo tqc --env FetchSlide-v1 -f logs/
```
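If you prefer to load the trained agent directly in Python rather than through the RL Zoo scripts, a minimal sketch is shown below. The checkpoint path is an assumption (the exact folder layout created by `load_from_hub` depends on your RL Zoo version), and `FetchSlide-v1` requires the MuJoCo robotics environments.
```python
# Minimal loading sketch; the checkpoint path is an assumed example, not a guaranteed layout.
import gym
from sb3_contrib import TQC
from sb3_contrib.common.wrappers import TimeFeatureWrapper

env = TimeFeatureWrapper(gym.make("FetchSlide-v1"))
model = TQC.load("logs/tqc/FetchSlide-v1_1/FetchSlide-v1.zip", env=env)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```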
## Training (with the RL Zoo)
```
python train.py --algo tqc --env FetchSlide-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchSlide-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('learning_rate', 0.001),
('n_timesteps', 3000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, max_episode_length=100 )'),
('tau', 0.005),
('normalize', False)])
```
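For reference, a rough sketch of how the hyperparameters above map onto an sb3-contrib `TQC` agent follows. This is not the exact RL Zoo training script, and some `HerReplayBuffer` keyword arguments shown here (e.g. `online_sampling`, `max_episode_length`) were removed in later Stable-Baselines3 releases.
```python
# Construction sketch based on the hyperparameters listed above; not the RL Zoo script.
import gym
from sb3_contrib import TQC
from sb3_contrib.common.wrappers import TimeFeatureWrapper
from stable_baselines3 import HerReplayBuffer

env = TimeFeatureWrapper(gym.make("FetchSlide-v1"))
model = TQC(
    "MultiInputPolicy",
    env,
    batch_size=512,
    buffer_size=1_000_000,
    gamma=0.98,
    learning_rate=1e-3,
    tau=0.005,
    policy_kwargs=dict(net_arch=[512, 512, 512], n_critics=2),
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        online_sampling=True,
        goal_selection_strategy="future",
        n_sampled_goal=4,
        max_episode_length=100,
    ),
    verbose=1,
)
# model.learn(total_timesteps=3_000_000)
```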
|
Alvenir/wav2vec2-base-da-ft-nst | Alvenir | "2022-03-17T16:16:12Z" | 13 | 3 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech-to-text",
"da",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-15T08:16:18Z" | ---
language: da
tags:
- speech-to-text
license: apache-2.0
---
# wav2vec2-base-da-ft-nst
This is the [alvenir wav2vec2 model](https://huggingface.co/Alvenir/wav2vec2-base-da) for Danish ASR, fine-tuned by Alvenir on the public NST dataset. The model is trained on 16 kHz audio, so make sure your data uses the same sample rate.
The model was trained using fairseq and then converted to huggingface/transformers format.
Alvenir is always happy to help with your own open-source ASR projects, customized domain specializations or premium models. ;-)
## Usage
```Python
import soundfile as sf
import torch
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2Processor, Wav2Vec2ForCTC


def get_tokenizer(model_path: str) -> Wav2Vec2CTCTokenizer:
    return Wav2Vec2CTCTokenizer.from_pretrained(model_path)


def get_processor(model_path: str) -> Wav2Vec2Processor:
    return Wav2Vec2Processor.from_pretrained(model_path)


def load_model(model_path: str) -> Wav2Vec2ForCTC:
    return Wav2Vec2ForCTC.from_pretrained(model_path)


model_id = "Alvenir/wav2vec2-base-da-ft-nst"
model = load_model(model_id)
model.eval()
tokenizer = get_tokenizer(model_id)
processor = get_processor(model_id)

audio_file = "<path/to/audio.wav>"
audio, _ = sf.read(audio_file)

input_values = processor(audio, return_tensors="pt", padding="longest", sampling_rate=16_000).input_values

with torch.no_grad():
    logits = model(input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```
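If your recordings are not already sampled at 16 kHz, resample them before running the snippet above. A small sketch using torchaudio (assumed to be installed; the file name is a placeholder and `processor` is reused from the code above):
```python
# Resampling sketch for audio that is not already 16 kHz.
import torchaudio

waveform, sample_rate = torchaudio.load("<path/to/audio.wav>")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

# torchaudio returns a (channels, time) tensor; use the first channel as a 1-D array.
audio = waveform[0].numpy()
input_values = processor(audio, return_tensors="pt", padding="longest", sampling_rate=16_000).input_values
```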
## Benchmark results
These are benchmark results on the publicly available Danish datasets.
| Dataset | WER Greedy | WER with 3-gram Language Model |
|---------------------|------------|--------------------|
| NST test | 15.8% | 11.9% |
| alvenir-asr-da-eval | 19.0% | 12.1% |
| common_voice_80 da test | 26.3% | 19.2% |
|
RichardErkhov/curiositytech_-_MARS-v0.2-8bits | RichardErkhov | "2025-03-31T04:27:56Z" | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-31T04:21:13Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MARS-v0.2 - bnb 8bits
- Model creator: https://huggingface.co/curiositytech/
- Original model: https://huggingface.co/curiositytech/MARS-v0.2/
Original model description:
---
license: llama3
language:
- tr
- en
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
model-index:
- name: MARS
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge TR v0.2
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc
value: 43.85
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag TR
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc
value: 46.64
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA TR v0.2
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: acc
name: accuracy
value: 48.66
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande TR v0.2
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.84
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k TR v0.2
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.30
name: accuracy
pipeline_tag: text-generation
---
<img src="MARS-2.0.png" alt="Curiosity MARS model logo" style="border-radius: 1rem; width: 100%">
<div style="display: flex; justify-content: center; align-items: center; flex-direction: column">
<h1 style="font-size: 5em; margin-bottom: 0; padding-bottom: 0;">MARS-v0.2</h1>
<aside>by <a href="https://curiosity.tech">Curiosity Technology</a></aside>
</div>
MARS-v0.2 is the second iteration of Curiosity Technology models, built on the foundation of Llama 3.1 8B. This version expands upon the initial MARS model by fine-tuning it with a more comprehensive dataset, with an increased emphasis on mathematical data to enhance its reasoning and problem-solving capabilities.
We've continued our commitment to Turkish language processing, utilizing both in-house Turkish datasets and a broader selection of translated open-source datasets. We believe this version will serve the community with even more versatility and depth.
MARS has been trained for 3 days on 4xA100 GPUs.
## Model Details
- **Base Model**: Meta Llama 3.1 8B Instruct
- **Training Dataset**: In-house & Translated Open Source Turkish Datasets
- **Training Method**: LoRA Fine Tuning
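The exact LoRA configuration is not published in this card, so the following is only an illustrative sketch of how such a fine-tune is typically set up with PEFT; the rank, scaling, dropout and target modules below are assumptions, not the values used for MARS-v0.2.
```python
# Illustrative LoRA setup sketch; all hyperparameter values here are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

lora_config = LoraConfig(
    r=16,                 # assumed rank
    lora_alpha=32,        # assumed scaling
    lora_dropout=0.05,    # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```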
## How to use
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
### Transformers pipeline
```python
import transformers
import torch

model_id = "curiositytech/MARS-v0.2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    # "You are a pirate chatbot who speaks like a pirate!"
    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
    # "Who are you?"
    {"role": "user", "content": "Sen kimsin?"},
]

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1])
```
### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "curiositytech/MARS-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    # "You are a pirate chatbot who speaks like a pirate!"
    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
    # "Who are you?"
    {"role": "user", "content": "Sen kimsin?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
|
visdata/btz18 | visdata | "2024-12-30T07:50:36Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-30T07:46:30Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yyyy1992/my_disflu_chinese_model | yyyy1992 | "2023-08-16T02:21:47Z" | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-11T07:34:06Z" | ---
base_model: bert-base-chinese
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_disflu_chinese_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_disflu_chinese_model
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2753
- Accuracy: 0.9154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
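As a rough sketch, these hyperparameters correspond to a `TrainingArguments` setup like the one below; dataset loading and tokenization are omitted, and the number of labels is an assumption.
```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=2)  # label count assumed
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")

args = TrainingArguments(
    output_dir="my_disflu_chinese_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
)

# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```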
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 278 | 0.2357 | 0.9100 |
| 0.258 | 2.0 | 556 | 0.2753 | 0.9154 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.11.0
- Tokenizers 0.13.3
|
haaafizumarashraf/latest-llama3.2-text-to-yaml | haaafizumarashraf | "2025-03-24T09:53:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-24T09:34:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FatimaK6/breast_cancer_convnext_large | FatimaK6 | "2025-03-25T13:13:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-03-25T13:12:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jayanthspratap/mobilenet_v2_1.0_224-cxr-view | jayanthspratap | "2023-08-19T21:33:12Z" | 196 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilenet_v2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-08-19T00:28:09Z" | ---
license: other
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: mobilenet_v2_1.0_224-cxr-view
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.929384965831435
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilenet_v2_1.0_224-cxr-view
This model is a fine-tuned version of [google/mobilenet_v2_1.0_224](https://huggingface.co/google/mobilenet_v2_1.0_224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2278
- Accuracy: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7049 | 1.0 | 109 | 0.6746 | 0.7449 |
| 0.6565 | 2.0 | 219 | 0.6498 | 0.6743 |
| 0.5699 | 3.0 | 328 | 0.5730 | 0.7995 |
| 0.5702 | 4.0 | 438 | 0.5119 | 0.8087 |
| 0.4849 | 5.0 | 547 | 0.4356 | 0.8679 |
| 0.356 | 6.0 | 657 | 0.4641 | 0.8087 |
| 0.3713 | 7.0 | 766 | 0.3407 | 0.8679 |
| 0.4571 | 8.0 | 876 | 0.4896 | 0.7813 |
| 0.3896 | 9.0 | 985 | 0.3124 | 0.8884 |
| 0.3422 | 10.0 | 1095 | 0.2791 | 0.9271 |
| 0.3358 | 11.0 | 1204 | 0.3998 | 0.8246 |
| 0.3658 | 12.0 | 1314 | 0.2716 | 0.9066 |
| 0.4547 | 13.0 | 1423 | 0.5828 | 0.7973 |
| 0.2615 | 14.0 | 1533 | 0.3446 | 0.8542 |
| 0.377 | 15.0 | 1642 | 0.6322 | 0.7312 |
| 0.2846 | 16.0 | 1752 | 0.2621 | 0.9248 |
| 0.3433 | 17.0 | 1861 | 0.3709 | 0.8383 |
| 0.2851 | 18.0 | 1971 | 0.8134 | 0.7312 |
| 0.2298 | 19.0 | 2080 | 0.4324 | 0.8314 |
| 0.3916 | 20.0 | 2190 | 0.3631 | 0.8360 |
| 0.3049 | 21.0 | 2299 | 0.3405 | 0.8633 |
| 0.3068 | 22.0 | 2409 | 0.2585 | 0.9021 |
| 0.3091 | 23.0 | 2518 | 0.2278 | 0.9294 |
| 0.2749 | 24.0 | 2628 | 0.2963 | 0.9043 |
| 0.3543 | 25.0 | 2737 | 0.2637 | 0.8975 |
| 0.3024 | 26.0 | 2847 | 0.2966 | 0.8998 |
| 0.2593 | 27.0 | 2956 | 0.3842 | 0.8542 |
| 0.1979 | 28.0 | 3066 | 0.2711 | 0.8884 |
| 0.2549 | 29.0 | 3175 | 0.3145 | 0.8633 |
| 0.3216 | 29.86 | 3270 | 0.4565 | 0.8155 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
daze-unlv/google-bigbird-roberta-base | daze-unlv | "2024-03-12T22:47:33Z" | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"big_bird",
"multiple-choice",
"generated_from_trainer",
"base_model:google/bigbird-roberta-base",
"base_model:finetune:google/bigbird-roberta-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | "2024-03-12T09:57:12Z" | ---
license: apache-2.0
base_model: google/bigbird-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: google-bigbird-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-bigbird-roberta-base
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3863
- Accuracy: 0.2474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3882 | 1.0 | 2857 | 1.3863 | 0.2692 |
| 1.3869 | 2.0 | 5714 | 1.3863 | 0.2472 |
| 1.3865 | 3.0 | 8571 | 1.3863 | 0.2474 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
|
linzsrivas/porns-ai-generator | linzsrivas | "2025-03-06T17:00:17Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2025-03-06T16:58:36Z" | ---
license: mit
---
# 7 Best AI Porn Generators Of 2025
The world of adult content has been revolutionized by artificial intelligence, with AI porn generators pushing the boundaries of realism and creativity. As we step into 2025, these tools have become more advanced, accessible, and controversial than ever. Whether you're curious about the technology or exploring its possibilities, we’ve rounded up the 7 best AI porn generators of 2025—showcasing the cutting-edge tools shaping this evolving industry.
## 1. Seduced.ai
### Why I Recommend Seduced.ai
Seduced.ai stands out as the best AI porn generator available today. It offers a unique blend of user-friendliness and extensive customization options, making it accessible for everyone, regardless of technical expertise. The platform allows users to explore their fantasies and create personalized content effortlessly.
⏩⏩⏩[**Try Seduced.ai For Free**](https://sussap.net/h88f)

### Key Features
Extensive Fetish Support: Seduced.ai covers a wide range of fetishes, allowing users to generate content that caters to their specific desires.
Video Generation: Users can create short porn videos of up to 6 seconds, combining multiple sequences for a seamless experience.
Character Reusability: The platform allows users to save and reuse previously generated characters, enhancing creativity and continuity in content creation.
High-Quality Output: Seduced.ai provides options for upscaling images, ensuring that the generated content is not only unique but also visually appealing.
### My Experience
Using Seduced.ai has been a delightful experience. The interface is intuitive, making it easy to navigate through various options. I was able to generate high-quality images and videos quickly, which exceeded my expectations. The customization options allowed me to explore different scenarios and characters effortlessly.
### Pros
Easy to use, with no technical skills required.
Offers a vast array of extensions for unique content creation.
### Cons
Some features may require a subscription for full access.
⏩⏩⏩[**Try Seduced.ai For Free**](https://sussap.net/h88f)
## 2. Pornx.ai
Pornx.ai is a revolutionary platform that allows users to create stunning AI-generated adult content tailored to their fantasies. With its user-friendly interface and advanced features, it stands out as the best AI porn generator available today. I highly recommend it for anyone looking to explore their creativity in a safe and imaginative environment.
⏩⏩⏩[**Try Pornx.ai For Free**](https://sussap.net/9gfc)
### Why I Recommend It
Pornx.ai offers an unparalleled experience for users who wish to bring their fantasies to life. The platform's innovative tools and features make it easy to customize and generate unique content, ensuring that every user can create something truly special.
### Key Features
AI Image Generator: Create personalized images by selecting models, body types, and backgrounds.
Quality Mode: Enhance your images with options for Base, High, and Ultra quality settings.
Custom Pose: Transfer character poses from your images to generated content effortlessly.
In Paint: Modify specific areas of your images to achieve the desired look.
### My Experience
Using Pornx.ai has been an exciting journey. The intuitive design made it easy to navigate, and the results were impressive. I was able to create visuals that perfectly matched my imagination, making the experience both enjoyable and fulfilling.
### Pros
Extensive customization options allow for limitless creativity.
High-quality output enhances the overall visual experience.
### Cons
Some features may require a paid subscription for full access.
⏩⏩⏩[**Try Pornx.ai For Free**](https://sussap.net/9gfc)
## 3. Porngen.art
PornGen.art is a revolutionary platform that utilizes advanced artificial intelligence to create highly realistic and customizable pornographic images. This AI porn generator allows users to bring their fantasies to life, whether it's a dream character or a specific scenario. With its user-friendly interface and powerful algorithms, PornGen.art stands out as one of the best options available in the market.
### Why I Recommend It
PornGen.art is not just about generating images; it’s about creating personalized experiences. The platform prioritizes user privacy and offers a variety of customization options, making it a top choice for those looking to explore their fantasies safely and creatively.
### Key Features
Realistic Image Generation: Utilizes deep learning algorithms to create lifelike images.
Customizable Options: Users can adjust body type, hair, ethnicity, and more to fit their desires.
Privacy Protection: All uploaded images are confidential and deleted within 48 hours.
Multiple Styles: Explore various genres, including hentai, anime, and furry art.
### My Experience
Using PornGen.art has been an exciting journey. The ease of uploading images and the speed of generation amazed me. The results were impressive, and I appreciated the level of customization available.
### Pros
High-quality, realistic images that cater to diverse preferences.
Strong emphasis on user privacy and data security.
### Cons
Results can vary significantly based on the quality of the uploaded images.
## 4. Pornjourney.ai
PornJourney.ai stands out as the best AI porn generator available today, offering users an unparalleled experience in creating customized adult content. I recommend it for its advanced technology, user-friendly interface, and commitment to privacy and security. The platform allows users to generate images that cater to their specific preferences, making it a favorite among enthusiasts.
### Key Features
Fast Generation: Dedicated server clusters ensure quick image creation for premium users.
'Keep This Girl' Feature: Retain and modify the features of your favorite AI-generated characters.
Image Library: Save images and their metadata for easy access and modifications.
Privacy Protection: All images are encrypted, ensuring user data remains secure and private.
### My Experience
Using PornJourney.ai has been a delightful experience. The image generation process is seamless, and the results are incredibly realistic. I appreciate the variety of customization options available, allowing me to create characters that truly match my preferences.
### Pros
Exceptional realism and detail in generated images.
Regular updates with new features and content every weekend.
### Cons
AI porn videos are still in beta, which may lead to occasional instability.
## 5. Pornjoy.ai
PornJoy.ai stands out as the premier AI porn generator, offering users an innovative platform to create and customize adult content effortlessly. I recommend it for its user-friendly interface and extensive customization options that cater to a wide range of fantasies.
### Why I Recommend It
PornJoy.ai provides a unique blend of creativity and privacy, allowing users to explore their desires in a safe environment. The platform's advanced AI technology ensures high-quality images that truly reflect individual preferences.
### Key Features
AI Porn Generator: Create personalized porn images by selecting body types, skin tones, hairstyles, and outfits.
AI Porn Chat: Engage in steamy conversations with customizable AI characters, enhancing the interactive experience.
AI Hentai Generator: Quickly generate unique hentai images tailored to your specific desires.
Undress AI Generator: Transform dressed images into AI nudes, allowing for creative modifications and adjustments.
### My Experience
Using PornJoy.ai has been a delightful experience. The intuitive design made it easy to navigate, and the variety of customization options allowed me to create images that perfectly matched my fantasies.
### Pros
High-quality, realistic AI-generated images.
Strong emphasis on user privacy and data protection.
### Cons
Some features may require a learning curve for new users.
## 6. Pornpen.ai
### Why I Recommend It
I recommend Pornpen.ai for its ability to generate high-quality, personalized adult content that caters to diverse tastes. The user-friendly interface and impressive customization options make it accessible for everyone, regardless of their experience level.
### Key Features
Customizable Content: Users can specify their preferences, ensuring the generated content aligns with their desires.
High-Quality Graphics: The platform produces visually appealing images and videos that enhance the overall experience.
Privacy Protection: Pornpen.ai prioritizes user privacy, ensuring that all interactions remain confidential.
Regular Updates: The platform frequently updates its algorithms to improve content quality and user experience.
### My Experience
My experience with Pornpen.ai has been overwhelmingly positive. The platform is easy to navigate, and I was impressed by the quality of the generated content. The customization options allowed me to explore various themes, making it a fun and engaging experience.
### Pros
Innovative Technology: The AI behind Pornpen.ai is cutting-edge, producing unique content that is hard to find elsewhere.
User-Friendly Interface: The platform is designed for ease of use, making it accessible for all users.
### Cons
One downside is that the generated content may not always meet expectations, as it relies on algorithms that can sometimes produce unexpected results.
## 7. Candy.ai
### Why I Recommend It
Candy.ai is highly recommended for its ability to blend intimacy, creativity, and personalization. Users can explore various fantasies and customize their AI girlfriend to meet their desires, ensuring a fulfilling experience.
### Key Features
Customizable AI Girlfriend: Users can design their girlfriend's body type, personality, and clothing, creating a truly unique companion.
Interactive Experience: The AI girlfriend listens, responds quickly, and can even follow photo requests, making interactions feel genuine.
Privacy and Security: Candy.ai prioritizes user privacy with state-of-the-art secure data storage, ensuring all interactions remain confidential.
Endless Possibilities: Users can explore various scenarios, from romantic chats to intense AI sexting, catering to all preferences.
### My Experience
Using Candy.ai has been an enjoyable journey. The customization options allowed me to create a girlfriend that truly resonates with my desires. The interactions felt real, and I appreciated the privacy measures in place.
### Pros
Highly customizable experience tailored to individual preferences.
Strong emphasis on user privacy and data security.
### Cons
Some users may find the AI's responses occasionally lack depth.
## Frequently Asked Questions (FAQS)
### 1. What is AI porn?
AI porn refers to adult content created or enhanced using artificial intelligence technologies. This can include generating realistic images, videos, or deepfakes of individuals, often without their consent. AI porn leverages machine learning algorithms to manipulate or create explicit content that can appear highly authentic.
### 2. How does AI porn work?
AI porn typically relies on deep learning techniques, such as Generative Adversarial Networks (GANs) or diffusion models. These algorithms are trained on large datasets of images and videos to learn patterns and generate new content. For example:
Deepfakes: AI swaps faces in existing videos to make it appear as though someone is performing in a pornographic video.
Image generation: AI creates entirely synthetic images or videos of people who may not exist.
Enhancement: AI improves the quality of existing content, making it more realistic.
### 3. Can AI porn generators create realistic content?
Yes, AI porn generators can create highly realistic content. Advances in AI technology, particularly with GANs and diffusion models, have made it possible to produce images and videos that are nearly indistinguishable from real footage. However, the quality depends on the sophistication of the AI model and the data it was trained on.
### 4. Are there ethical and privacy concerns regarding AI porn?
Yes, AI porn raises significant ethical and privacy concerns:
Non-consensual content: Many AI porn creations involve using someone's likeness without their permission, which is a violation of privacy and consent.
Misuse and exploitation: AI porn can be used for harassment, revenge porn, or blackmail, causing emotional and psychological harm to victims.
Legal gray areas: Laws around AI-generated explicit content are still evolving, making it difficult to regulate or hold perpetrators accountable.
Impact on society: The proliferation of AI porn could normalize non-consensual content and contribute to the objectification of individuals.
|
mradermacher/Mahou-1.4-llama3-8B-i1-GGUF | mradermacher | "2024-12-16T02:54:54Z" | 83 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:flammenai/MahouMix-v1",
"dataset:flammenai/FlameMix-DPO-v1",
"base_model:flammenai/Mahou-1.3a-llama3-8B",
"base_model:quantized:flammenai/Mahou-1.3a-llama3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-05-30T18:48:36Z" | ---
base_model: flammenai/Mahou-1.3a-llama3-8B
datasets:
- flammenai/MahouMix-v1
- flammenai/FlameMix-DPO-v1
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/flammenai/Mahou-1.3a-llama3-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
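As a minimal local-inference sketch (one of several ways to run GGUF files), the example below uses the llama-cpp-python bindings; the file name is taken from the Q4_K_M row of the table that follows and must be downloaded first, and the context size and sampling settings are only examples.
```python
# Sketch using llama-cpp-python; file name and settings are examples, not recommendations.
from llama_cpp import Llama

llm = Llama(model_path="Mahou-1.4-llama3-8B.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```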
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.4-llama3-8B-i1-GGUF/resolve/main/Mahou-1.4-llama3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
whatsupbr0/0b41c577-b58d-4cd6-8dd1-09cd0a9ce6d9 | whatsupbr0 | "2025-04-10T13:32:07Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-10T13:00:56Z" | (card content was a Hugging Face 429 rate-limit error page; no model card available) |
LarryAIDraw/runa_shirakawa | LarryAIDraw | "2023-12-12T17:32:42Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-12-12T17:29:34Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/228840/runa-shirakawa-keikenzumi-na-kimi-to-keiken-zero-na-ore-ga-otsukiai-suru-hanashi |
albertus-sussex/veriscrape-simcse-auto-reference_2_to_verify_8-fold-7 | albertus-sussex | "2025-03-26T11:44:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-26T11:43:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
strangervb/Llama-2-70B-Chat-GPTQ-2 | strangervb | "2024-03-25T21:03:11Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"arxiv:2307.09288",
"base_model:meta-llama/Llama-2-70b-chat-hf",
"base_model:quantized:meta-llama/Llama-2-70b-chat-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-03-22T04:52:44Z" | ---
base_model: meta-llama/Llama-2-70b-chat-hf
inference: false
language:
- en
license: llama2
model_creator: Meta Llama 2
model_name: Llama 2 70B Chat
model_type: llama
pipeline_tag: text-generation
prompt_template: '[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as
possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses
are socially unbiased and positive in nature. If a question does not make any sense,
or is not factually coherent, explain why instead of answering something not correct.
If you don''t know the answer to a question, please don''t share false information.
<</SYS>>
{prompt}[/INST]
'
quantized_by: TheBloke
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 70B Chat - GPTQ
- Model creator: [Meta Llama 2](https://huggingface.co/meta-llama)
- Original model: [Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Meta Llama 2's Llama 2 70B Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF)
* [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Llama-2-Chat
```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
```
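If you want to assemble this prompt in code, here is a small helper; it is a sketch only, and simply mirrors the template shown above (the default system message is illustrative and can be replaced):

```python
# Sketch: build a Llama-2-Chat prompt string matching the template above.
DEFAULT_SYSTEM = (
    "You are a helpful, respectful and honest assistant. Always answer as "
    "helpfully as possible, while being safe."
)

def build_llama2_chat_prompt(user_message: str, system_message: str = DEFAULT_SYSTEM) -> str:
    """Wrap a system message and a user message in the Llama-2-Chat format."""
    return (
        "[INST] <<SYS>>\n"
        f"{system_message}\n"
        "<</SYS>>\n"
        f"{user_message}[/INST]"
    )

print(build_llama2_chat_prompt("Tell me about AI"))
```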
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, as are all files in non-main branches. Files in the `main` branch that were uploaded before August 2023 were made with GPTQ-for-LLaMa.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/main) | 4 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-3bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-64g-actorder_True) | 3 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 29.30 GB | No | 3-bit, with group size 64g and act-order. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-3bit-128g-actorder_False](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit-128g-actorder_False) | 3 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.78 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
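As an illustration of how these columns map onto a quantisation config, here is a sketch of the settings behind the `main` branch above (4-bit, no group size, act order, damp 0.01). Parameter names follow recent AutoGPTQ releases and are shown for orientation only, not as the exact artefact used to produce the files:

```python
# Sketch: the `main` branch's quantisation parameters expressed as an AutoGPTQ config.
# Values mirror the table row above; treat this as illustrative, not authoritative.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,             # "Bits" column
    group_size=-1,      # "GS" column: None, i.e. no group size
    desc_act=True,      # "Act Order" column
    damp_percent=0.01,  # "Damp %" column
)
```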
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-70B-chat-GPTQ:main`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch main https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
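For example, to fetch only the files of a specific non-main branch without cloning the whole repo, a sketch using `huggingface_hub` (branch name taken from the table above) could look like this:

```python
# Sketch: download a single branch of the GPTQ repo via huggingface_hub.
# The `revision` argument selects the branch, exactly as in from_pretrained below.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/Llama-2-70B-chat-GPTQ",
    revision="gptq-4bit-128g-actorder_True",
)
print(local_dir)
```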
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to do a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-70B-chat-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Llama-2-70B-chat-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-70B-chat-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers>=4.32.0 optimum>=1.12.0
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
### For CodeLlama models only: you must use Transformers 4.33.0 or later.
If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
```shell
pip3 uninstall -y transformers
pip3 install git+https://github.com/huggingface/transformers.git
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Llama-2-70B-chat-GPTQ"
# To use a different branch, change revision
# For example: revision="main"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Meta Llama 2's Llama 2 70B Chat
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)| |